Retention In Vision

What are Retention Networks

Retentive Network (RetNet) is a foundational architecture proposed for large language models in the paper Retentive Network: A Successor to Transformer for Large Language Models. The architecture is designed to achieve three goals that are difficult to satisfy simultaneously in large-scale language modeling: training parallelism, low-cost inference, and strong performance.

RetNet tackles these challenges by introducing the Multi-Scale Retention (MSR) mechanism, an alternative to the multi-head attention mechanism commonly used in Transformer models. MSR has a dual form combining recurrence and parallelism, so models can be trained in parallel while inference is conducted recurrently. We will explore RetNet in more detail later.

The Multi-Scale Retention mechanism operates under three computation paradigms:

- Parallel representation: used during training, computing all positions at once so that GPUs are fully utilized for fast computation.
- Recurrent representation: used during inference, favoring autoregressive decoding; it reduces memory usage and latency while producing the same outputs as the parallel form.
- Chunkwise recurrent representation: a hybrid of the two that processes each chunk in parallel and carries state recurrently across chunks, which is particularly effective for long sequences in terms of computational efficiency and memory use.
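To make the dual form concrete, here is a minimal single-head sketch in PyTorch showing that the parallel and recurrent computations produce the same output. The decay is fixed to a single scalar, and the scaling, gating, and normalization used in the actual RetNet are omitted; this is an illustration of the principle, not the paper's implementation.

```python
import torch

def parallel_retention(q, k, v, gamma):
    """Parallel form: all positions at once, as used during training."""
    n = q.shape[0]
    idx = torch.arange(n)
    # Causal decay matrix D[i, j] = gamma**(i - j) for j <= i, else 0.
    decay = torch.where(idx[:, None] >= idx[None, :],
                        gamma ** (idx[:, None] - idx[None, :]).float(),
                        torch.zeros(1))
    return (q @ k.T * decay) @ v

def recurrent_retention(q, k, v, gamma):
    """Recurrent form: one position at a time, as used during inference."""
    state = torch.zeros(q.shape[1], v.shape[1])
    outputs = []
    for t in range(q.shape[0]):
        state = gamma * state + k[t:t+1].T @ v[t:t+1]   # S_t = gamma*S_{t-1} + k_t^T v_t
        outputs.append(q[t:t+1] @ state)                 # o_t = q_t S_t
    return torch.cat(outputs, dim=0)

# The two forms give the same result (single head, normalization omitted).
q, k, v = (torch.randn(6, 4) for _ in range(3))
print(torch.allclose(parallel_retention(q, k, v, 0.9),
                     recurrent_retention(q, k, v, 0.9), atol=1e-5))  # True
```

The chunkwise recurrent form combines the two: tokens within a chunk are processed with the parallel form, while the recurrent state is passed from one chunk to the next.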

From Language to Image

RMT

The paper RMT: Retentive Networks Meet Vision Transformers proposes a new vision backbone inspired by the RetNet architecture. The authors propose RMT to enhance the Vision Transformer (ViT) by introducing explicit spatial priors and reducing computational complexity, drawing inspiration from RetNet's parallel representation. To do so, they adapt RetNet's temporal decay to the spatial domain, replacing it with a spatial decay matrix based on the Manhattan distance between tokens; the resulting mechanism is called Manhattan Self-Attention (MaSA). Combined with a decomposed form of the attention computation, this improves efficiency and scalability in vision tasks.
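Below is a rough single-head sketch of the idea, assuming a regular grid of patch tokens: a decay mask built from pairwise Manhattan distances modulates the attention scores. Function names and the choice of gamma are illustrative, and the paper's decomposed attention form and per-head decay rates are omitted.

```python
import torch

def manhattan_decay_matrix(height: int, width: int, gamma: float) -> torch.Tensor:
    """Spatial decay matrix D with D[i, j] = gamma ** manhattan_distance(i, j),
    where i and j index tokens on a 2D grid of patches."""
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    dist = torch.cdist(coords, coords, p=1)  # pairwise L1 (Manhattan) distances, (N, N)
    return gamma ** dist

def masa_like_attention(q, k, v, decay):
    """Softmax attention modulated by a spatial decay mask (single head)."""
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5  # (N, N)
    scores = scores.softmax(dim=-1) * decay                  # element-wise spatial decay
    return scores @ v

# Toy usage: a 4x4 grid of patch tokens with 8-dim features.
h = w = 4
n, d = h * w, 8
q, k, v = (torch.randn(n, d) for _ in range(3))
out = masa_like_attention(q, k, v, manhattan_decay_matrix(h, w, gamma=0.9))
print(out.shape)  # torch.Size([16, 8])
```

In contrast to RetNet's causal, one-dimensional decay over token positions, this mask is bidirectional and decays with the two-dimensional distance between patches, which is the explicit spatial prior RMT introduces.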

However, unlike the original RetNet, which trains with the parallel representation and runs inference with the recurrent representation, RMT uses the MaSA mechanism for both. The authors compare MaSA against the other RetNet representations and show that it achieves the best throughput together with the highest accuracy.

ViR

Another work inspired by the RetNet architecture is ViR, introduced in the paper ViR: Vision Retention Networks. In this architecture, the authors propose a general vision backbone with a redesigned retention mechanism. They demonstrate that ViR scales favorably to larger image resolutions in terms of image throughput and memory consumption by leveraging the dual parallel and recurrent properties of the retentive network.

The overall architecture of ViR is quite similar to that of ViT, except that it replaces the Multi-Head Attention (MHA) with Multi-Head Retention (MHR). This MHR mechanism is free of any gating function and can be switched between parallel, recurrent, or chunkwise (a hybrid between parallel and recurrent) modes. Another difference in ViR is that the positional embedding is first added to the patch embedding, and then the [class] token is appended.
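As a rough illustration of that token ordering, here is a minimal sketch of the embedding step. Layer names and sizes are assumptions for the example, not the paper's exact implementation: positional embeddings are added to the patch embeddings first, and only then is the [class] token appended to the sequence.

```python
import torch
import torch.nn as nn

class ViRStyleEmbedding(nn.Module):
    """Sketch of the token preparation described above (hypothetical sizes)."""

    def __init__(self, img_size=224, patch_size=16, dim=192):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, images):                                    # (B, 3, H, W)
        x = self.patch_embed(images).flatten(2).transpose(1, 2)   # (B, N, dim)
        x = x + self.pos_embed                 # positions added to patches only
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        return torch.cat([x, cls], dim=1)      # [class] token appended afterwards

tokens = ViRStyleEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 192])
```

The resulting token sequence is then processed by the MHR blocks, which can run in parallel, recurrent, or chunkwise mode as described above.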

Further Reading