Mamba: Linear-Time Sequence Modeling with Selective State Spaces
[Figure 2 schematic omitted: Copying (left), Selective Copying (right top), and Induction Heads (right bottom); the Copying panel is annotated as perfectly solved by LTI (e.g. convolutional) models that do not need to look at the actual inputs.]

Figure 2: (Left) The standard version of the Copying task involves constant spacing between input and output elements and is easily solved by time-invariant models such as linear recurrences and global convolutions. (Right Top) The Selective Copying task has random spacing in between inputs and requires time-varying models that can selectively remember or ignore inputs depending on their content. (Right Bottom) The Induction Heads task is an example of associative recall that requires retrieving an answer based on context, a key ability for LLMs.

Algorithm 1 SSM (S4)
Input: x : (B, L, D)
Output: y : (B, L, D)
1: A : (D, N) <- Parameter          ▷ Represents structured N × N matrix
2: B : (D, N) <- Parameter
3: C : (D, N) <- Parameter
4: Δ : (D) <- τ_Δ(Parameter)
5: Ā, B̄ : (D, N) <- discretize(Δ, A, B)
6: y <- SSM(Ā, B̄, C)(x)            ▷ Time-invariant: recurrence or convolution
7: return y
Algorithm 2 SSM + Selection (S6)
Input: x : (B, L, D)
Output: y : (B, L, D)
1: A : (D, N) <- Parameter          ▷ Represents structured N × N matrix
2: B : (B, L, N) <- s_B(x)
3: C : (B, L, N) <- s_C(x)
4: Δ : (B, L, D) <- τ_Δ(Parameter + s_Δ(x))
5: Ā, B̄ : (B, L, D, N) <- discretize(Δ, A, B)
6: y <- SSM(Ā, B̄, C)(x)            ▷ Time-varying: recurrence (scan) only
7: return y
Algorithms 1 and 2 illustrate the main selection mechanism that we use. The main difference is simply making several parameters Δ, B, C functions of the input, along with the associated changes to tensor shapes throughout. In particular, we highlight that these parameters now have a length dimension L, meaning that the model has changed from time-invariant to time-varying. (Note that shape annotations were described in Section 2.) This loses the equivalence to convolutions (3) with implications for its efficiency, discussed next.

We specifically choose s_B(x) = Linear_N(x), s_C(x) = Linear_N(x), s_Δ(x) = Broadcast_D(Linear_1(x)), and τ_Δ = softplus, where Linear_d is a parameterized projection to dimension d. The choice of s_Δ and τ_Δ is due to a connection to RNN gating mechanisms explained in Section 3.5.
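As a concrete illustration, a minimal PyTorch sketch of this input-dependent parameterization is given below. Shapes follow Algorithm 2; the module and variable names are ours and this is not the reference implementation.

```python
# Minimal sketch of the S6 selection parameterization (Algorithm 2, lines 2-4).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectionParams(nn.Module):
    def __init__(self, d_model: int, d_state: int):
        super().__init__()
        self.s_B = nn.Linear(d_model, d_state)              # s_B(x) = Linear_N(x)
        self.s_C = nn.Linear(d_model, d_state)              # s_C(x) = Linear_N(x)
        self.s_dt = nn.Linear(d_model, 1)                   # s_Δ(x) = Broadcast_D(Linear_1(x))
        self.dt_bias = nn.Parameter(torch.zeros(d_model))   # the Δ "Parameter" (a bias term)

    def forward(self, x):                                   # x: (B, L, D)
        B = self.s_B(x)                                     # (B, L, N)
        C = self.s_C(x)                                     # (B, L, N)
        dt = F.softplus(self.s_dt(x) + self.dt_bias)        # τ_Δ = softplus; broadcast 1 -> D
        return dt, B, C                                     # dt: (B, L, D)

x = torch.randn(2, 16, 64)
dt, B, C = SelectionParams(64, 16)(x)
print(dt.shape, B.shape, C.shape)                           # (2,16,64) (2,16,16) (2,16,16)
```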
# 3.3 Efficient Implementation of Selective SSMs
Hardware-friendly architectures such as convolutions (Krizhevsky, Sutskever, and Hinton 2012) and Transformers (Vaswani et al. 2017) enjoy widespread application. Here we aim to make selective SSMs efficient on modern hardware (GPU) as well. The selection mechanism is quite natural, and earlier works attempted to incorporate special cases of selection, such as letting Δ vary over time in recurrent SSMs (Gu, Dao, et al. 2020). However, as previously mentioned, a core limitation in the usage of SSMs is their computational efficiency, which was why S4 and all derivatives used LTI (non-selective) models, most commonly in the form of global convolutions.
# 3.3.1 Motivation of Prior Models
We first revisit this motivation and overview our approach to overcome limitations of prior methods.
• At a high level, recurrent models such as SSMs always balance a tradeoff between expressivity and speed: as discussed in Section 3.1, models with larger hidden state dimension should be more effective but slower. Thus we want to maximize hidden state dimension without paying speed and memory costs.
• Note that the recurrent mode is more flexible than the convolution mode, since the latter (3) is derived from expanding the former (2) (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021). However, this would require computing and materializing the latent state h with shape (B, L, D, N), much larger (by a factor of N, the SSM state dimension) than the input x and output y of shape (B, L, D). Thus the more efficient convolution mode was introduced which could bypass the state computation and materializes a convolution kernel (3a) of only (B, L, D).
• Prior LTI SSMs leverage the dual recurrent-convolutional forms to increase the effective state dimension by a factor of N (≈10-100), much larger than traditional RNNs, without efficiency penalties.
# 3.3.2 Overview of Selective Scan: Hardware-Aware State Expansion
The selection mechanism is designed to overcome the limitations of LTI models; at the same time, we therefore need to revisit the computation problem of SSMs. We address this with three classical techniques: kernel fusion, parallel scan, and recomputation. We make two main observations:
• The naive recurrent computation uses O(BLDN) FLOPs while the convolutional computation uses O(BLD log(L)) FLOPs, and the former has a lower constant factor. Thus for long sequences and not-too-large state dimension N, the recurrent mode can actually use fewer FLOPs.
• The two challenges are the sequential nature of recurrence, and the large memory usage. To address the latter, just like the convolutional mode, we can attempt to not actually materialize the full state h.

The main idea is to leverage properties of modern accelerators (GPUs) to materialize the state h only in more efficient levels of the memory hierarchy. In particular, most operations (except matrix multiplication) are bounded by memory bandwidth (Dao, Fu, Ermon, et al. 2022; Ivanov et al. 2021; Williams, Waterman, and Patterson 2009). This includes our scan operation, and we use kernel fusion to reduce the amount of memory IOs, leading to a significant speedup compared to a standard implementation.

Concretely, instead of preparing the scan input (Ā, B̄) of size (B, L, D, N) in GPU HBM (high-bandwidth memory), we load the SSM parameters (Δ, A, B, C) directly from slow HBM to fast SRAM, perform the discretization and recurrence in SRAM, and then write the final outputs of size (B, L, D) back to HBM.

To avoid the sequential recurrence, we observe that despite not being linear it can still be parallelized with a work-efficient parallel scan algorithm (Blelloch 1990; Martin and Cundy 2018; Smith, Warrington, and Linderman 2023).

Finally, we must also avoid saving the intermediate states, which are necessary for backpropagation. We carefully apply the classic technique of recomputation to reduce the memory requirements: the intermediate states are not stored but recomputed in the backward pass when the inputs are loaded from HBM to SRAM. As a result, the fused selective scan layer has the same memory requirements as an optimized transformer implementation with FlashAttention. Details of the fused kernel and recomputation are in Appendix D.
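For reference, a naive sequential version of the selective scan can be sketched as follows. This is a correctness-oriented illustration only (it assumes the simplified Euler rule B̄ = ΔB for the input matrix) and deliberately omits the kernel fusion, parallel scan, and recomputation described above.

```python
# Naive reference selective scan: h_t = Ābar_t * h_{t-1} + B̄bar_t * x_t, y_t = C_t · h_t,
# with Ābar = exp(Δ·A) and the simplified B̄bar = Δ·B (an assumption of this sketch).
import torch

def selective_scan_ref(x, dt, A, B, C):
    # x: (b, L, D)  dt: (b, L, D)  A: (D, N)  B, C: (b, L, N)
    b, L, D = x.shape
    N = A.shape[1]
    h = torch.zeros(b, D, N, dtype=x.dtype)
    ys = []
    for t in range(L):
        Abar = torch.exp(dt[:, t, :, None] * A)                             # (b, D, N)
        Bbar_x = dt[:, t, :, None] * B[:, t, None, :] * x[:, t, :, None]    # (b, D, N)
        h = Abar * h + Bbar_x
        ys.append((h * C[:, t, None, :]).sum(-1))                           # (b, D)
    return torch.stack(ys, dim=1)                                           # (b, L, D)

b, L, D, N = 2, 32, 8, 4
y = selective_scan_ref(torch.randn(b, L, D), torch.rand(b, L, D),
                       -torch.rand(D, N), torch.randn(b, L, N), torch.randn(b, L, N))
print(y.shape)  # torch.Size([2, 32, 8])
```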
The full Selective SSM layer and algorithm is illustrated in Figure 1.
# 3.4 A Simplified SSM Architecture
As with structured SSMs, selective SSMs are standalone sequence transformations that can be flexibly incorporated into neural networks. The H3 architecture is the basis for the most well-known SSM architectures (Section 2), which are generally comprised of a block inspired by linear attention interleaved with an MLP (multi-layer perceptron) block. We simplify this architecture by combining these two components into one, which is stacked homogenously (Figure 3). This is inspired by the gated attention unit (GAU) (Hua et al. 2022), which did something similar for attention.
This architecture involves expanding the model dimension D by a controllable expansion factor E. For each block, most of the parameters (3ED^2) are in the linear projections (2ED^2 for input projections, ED^2 for output projection) while the inner SSM contributes less. The number of SSM parameters (projections for Δ, B, C, and the matrix A) are much smaller in comparison.

Figure 3: (Architecture.) Our simplified block design combines the H3 block, which is the basis of most SSM architectures, with the ubiquitous MLP block of modern neural networks. Instead of interleaving these two blocks, we simply repeat the Mamba block homogenously. Compared to the H3 block, Mamba replaces the first multiplicative gate with an activation function. Compared to the MLP block, Mamba adds an SSM to the main branch. For σ we use the SiLU / Swish activation (Hendrycks and Gimpel 2016; Ramachandran, Zoph, and Quoc V Le 2017).
We repeat this block, interleaved with standard normalization and residual connections, to form the Mamba architecture. We always fix to E = 2 in our experiments and use two stacks of the block to match the 12D^2 parameters of a Transformer's interleaved MHA (multi-head attention) and MLP blocks. We use the SiLU / Swish activation function (Hendrycks and Gimpel 2016; Ramachandran, Zoph, and Quoc V Le 2017), motivated so that the Gated MLP becomes the popular "SwiGLU" variant (Chowdhery et al. 2023; Shazeer 2020; Touvron et al. 2023). Finally, we additionally use an optional normalization layer (we choose LayerNorm (J. L. Ba, Kiros, and Hinton 2016)), motivated by RetNet's usage of a normalization layer in a similar location (Y. Sun et al. 2023).
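To make the block structure concrete, a minimal sketch of the gating pattern in Figure 3 is shown below. Assumptions: the selective SSM is left as a placeholder, and the short causal convolution used before the SSM in practice is omitted; this is not the reference implementation.

```python
# Sketch of the simplified Mamba block (expansion factor E = 2): one input projection that
# produces both the main branch and the gate, an activation + SSM on the main branch, a
# multiplicative SiLU gate, and an output projection back to the model dimension.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MambaBlockSketch(nn.Module):
    def __init__(self, d_model: int, expand: int = 2):
        super().__init__()
        d_inner = expand * d_model
        self.in_proj = nn.Linear(d_model, 2 * d_inner, bias=False)   # ~2ED^2 parameters
        self.ssm = nn.Identity()                                      # placeholder for S6
        self.out_proj = nn.Linear(d_inner, d_model, bias=False)      # ~ED^2 parameters

    def forward(self, x):                        # x: (B, L, D)
        x_branch, z = self.in_proj(x).chunk(2, dim=-1)
        y = self.ssm(F.silu(x_branch))           # activation then sequence transformation
        y = y * F.silu(z)                        # multiplicative gate
        return self.out_proj(y)

blk = MambaBlockSketch(64)
print(blk(torch.randn(2, 16, 64)).shape)         # torch.Size([2, 16, 64])
```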
# 3.5 Properties of Selection Mechanisms
The selection mechanism is a broader concept that can be applied in different ways, such as to more traditional RNNs or CNNs, to different parameters (e.g. A in Algorithm 2), or using different transformations s(x).
# 3.5.1 Connection to Gating Mechanisms
We highlight the most important connection: the classical gating mechanism of RNNs is an instance of our selection mechanism for SSMs. We note that the connection between RNN gating and the discretization of continuous-time systems is well established (Funahashi and Nakamura 1993; Tallec and Ollivier 2018). In fact, Theorem 1 is an improvement of Gu, Johnson, Goel, et al. (2021, Lemma 3.1) generalizing to the ZOH discretization and input-dependent gates (proof in Appendix C). More broadly, Δ in SSMs can be seen to play a generalized role of the RNN gating mechanism. In line with prior work, we adopt the view that discretization of SSMs is the principled foundation of heuristic gating mechanisms.
Theorem 1. When N = 1, A = -1, B = 1, s_Δ = Linear(x), and τ_Δ = softplus,
then the selective SSM recurrence (Algorithm 2) takes the form

g_t = σ(Linear(x_t))
h_t = (1 - g_t) h_{t-1} + g_t x_t.    (5)

As mentioned in Section 3.2, our specific choices of s_Δ and τ_Δ are from this connection. In particular, note that if a given input x_t should be completely ignored (as necessary in the synthetic tasks), all D channels should ignore it, and so we project the input down to 1 dimension before repeating/broadcasting with Δ.
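This correspondence can also be checked numerically; the following small script (our own check, consistent with the proof in Appendix C) verifies that the ZOH discretization with these settings reduces to the gated recurrence above.

```python
# With N=1, A=-1, B=1 and Δ_t = softplus(z_t), ZOH discretization gives
# Ābar_t = exp(-Δ_t) = 1 - σ(z_t) and B̄bar_t = 1 - exp(-Δ_t) = σ(z_t),
# i.e. exactly the recurrence h_t = (1 - g_t) h_{t-1} + g_t x_t with gate g_t = σ(z_t).
import numpy as np

def softplus(z): return np.log1p(np.exp(z))
def sigmoid(z):  return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
dt = softplus(z)
A_bar = np.exp(-dt)             # exp(Δ·A) with A = -1
B_bar = 1.0 - np.exp(-dt)       # (exp(Δ·A) - 1)/A · B with A = -1, B = 1
assert np.allclose(A_bar, 1.0 - sigmoid(z))
assert np.allclose(B_bar, sigmoid(z))
print("gated-RNN form recovered: h_t = (1 - σ(z_t)) h_{t-1} + σ(z_t) x_t")
```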
# 3.5.2 Interpretation of Selection Mechanisms
We elaborate on two particular mechanistic effects of selection.
Variable Spacing. Selectivity allows filtering out irrelevant noise tokens that may occur between inputs of interest. This is exemplified by the Selective Copying task, but occurs ubiquitously in common data modalities, particularly for discrete data; for example, the presence of language fillers such as "um". This property arises because the model can mechanistically filter out any particular input x_t, for example in the gated RNN case (Theorem 1) when g_t → 0.
Filtering Context. It has been empirically observed that many sequence models do not improve with longer context (F. Shi et al. 2023), despite the principle that more context should lead to strictly better performance. An explanation is that many sequence models cannot effectively ignore irrelevant context when necessary; an intuitive example is global convolutions (and general LTI models). On the other hand, selective models can simply reset their state at any time to remove extraneous history, and thus their performance in principle improves monotonically with context length (e.g. Section 4.3.2).
Boundary Resetting. In settings where multiple independent sequences are stitched together, Transformers can keep them separate by instantiating a particular attention mask, while LTI models will bleed information between the sequences. Selective SSMs can also reset their state at boundaries (e.g.
Δ_t → ∞, or Theorem 1 when g_t → 1). These settings may occur artificially (e.g. packing documents together to improve hardware utilization) or naturally (e.g. episode boundaries in reinforcement learning (Lu et al. 2023)).
Additionally, we elaborate on effects of each selective parameter.
Interpretation of Δ. In general, Δ controls the balance between how much to focus or ignore the current input x_t. It generalizes RNN gates (e.g. g_t in Theorem 1): mechanically, a large Δ resets the state h and focuses on the current input x, while a small Δ persists the state and ignores the current input. SSMs (1)-(2) can be interpreted as a continuous system discretized by a timestep Δ, and in this context the intuition is that large Δ → ∞ represents the system focusing on the current input for longer (thus "selecting" it and forgetting its current state) while a small Δ → 0 represents a transient input that is ignored.
Interpretation of A. We remark that while the A parameter could also be selective, it ultimately affects the model only through its interaction with Δ via Ā = exp(ΔA) (the discretization (4)). Thus selectivity in Δ is enough to ensure selectivity in (Ā, B̄), and is the main source of improvement. We hypothesize that making A selective in addition to (or instead of) Δ would have similar performance, and leave it out for simplicity.
Interpretation of B and C. As discussed in Section 3.1, the most important property of selectivity is filtering out irrelevant information so that a sequence model's context can be compressed into an efficient state. In an SSM, modifying B and C to be selective allows finer-grained control over whether to let an input x_t into the state h_t or the state into the output y_t.
These can be interpreted as allowing the model to modulate the recurrent dynamics based on content (input) and context (hidden states) respectively.
# 3.6 Additional Model Details
Real vs. Complex. Most prior SSMs use complex numbers in their state h, which is necessary for strong performance on many tasks (Gu, Goel, and Ré 2022). However, it has been empirically observed that completely real-valued SSMs seem to work fine, and possibly even better, in some settings (Ma et al. 2023). We use real values as the default, which work well for all but one of our tasks; we hypothesize that the complex-real tradeoff is related to the continuous-discrete spectrum in data modalities, where complex numbers are helpful for continuous modalities (e.g. audio, video) but not discrete (e.g. text, DNA).
Initialization. Most prior SSMs also suggest special initializations, particularly in the complex-valued case, which can help in several settings such as low-data regimes. Our default initialization for the complex case is S4D-Lin and for the real case is S4D-Real (Gu, Gupta, et al. 2022), which is based on the HIPPO theory (Gu, Dao, et al. 2020).
These define the n-th element of A as -1/2 + ni and -(n + 1) respectively. However, we expect many initializations to work fine, particularly in the large-data and real-valued SSM regimes; some ablations are considered in Section 4.6.
Parameterization of Δ. We defined the selective adjustment to Δ as s_Δ(x) = Broadcast_D(Linear_1(x)), which was motivated by the mechanics of Δ (Section 3.5). We observe that it can be generalized from dimension 1 to a larger dimension R. We set this to be a small fraction of D, which uses a negligible number of parameters compared to the main Linear projections in the block. We additionally note that the broadcasting operation can instead be viewed as another Linear projection, initialized to a specific pattern of 1's and 0's; if this projection is trainable, this leads to the alternative s_Δ(x) = Linear_D(Linear_R(x)), which can be viewed as a low-rank projection. In our experiments, the Δ parameter (which can be viewed as a bias term) is initialized to τ_Δ^{-1}(Uniform([0.001, 0.1])), following prior work on SSMs (Gu, Johnson, Timalsina, et al. 2023).
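A minimal sketch of this low-rank Δ parameterization and bias initialization is given below. It assumes uniform sampling over the stated range and uses illustrative module names; it is not the reference implementation.

```python
# Low-rank Δ parameterization s_Δ(x) = Linear_D(Linear_R(x)), with the Δ bias initialized
# to softplus^{-1}(Uniform([0.001, 0.1])) so that softplus(bias) starts in that range.
import torch
import torch.nn as nn

class DeltaProjection(nn.Module):
    def __init__(self, d_model: int, dt_rank: int, dt_min=1e-3, dt_max=1e-1):
        super().__init__()
        self.down = nn.Linear(d_model, dt_rank, bias=False)   # Linear_R
        self.up = nn.Linear(dt_rank, d_model, bias=True)      # Linear_D (generalized broadcast)
        dt = torch.rand(d_model) * (dt_max - dt_min) + dt_min  # Uniform([dt_min, dt_max])
        inv_softplus_dt = dt + torch.log(-torch.expm1(-dt))    # softplus^{-1}(dt)
        with torch.no_grad():
            self.up.bias.copy_(inv_softplus_dt)

    def forward(self, x):                                      # x: (B, L, D)
        return nn.functional.softplus(self.up(self.down(x)))   # Δ: (B, L, D), positive

dt = DeltaProjection(d_model=64, dt_rank=4)(torch.randn(2, 16, 64))
print(dt.shape, bool(dt.min() > 0))
```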
Remark 3.1. For brevity in our experimental results, we sometimes abbreviate selective SSMs as S6 models, because they are S4 models with a selection mechanism and computed with a scan.
# 4 Empirical Evaluation
In Section 4.1 we test Mamba's ability to solve the two synthetic tasks motivated in Section 3.1. We then evaluate on three domains, each evaluated on autoregressive pretraining as well as downstream tasks.
• Section 4.2: language model pretraining (scaling laws), and zero-shot downstream evaluation.
• Section 4.3: DNA sequence pretraining, and fine-tuning on a long-sequence classification task.
• Section 4.4: audio waveform pretraining, and the quality of autoregressively generated speech clips.
Finally, Section 4.5 shows Mamba's computational efficiency at both training and inference time, and Section 4.6 ablates various components of the architecture and selective SSMs.
# 4.1 Synthetic Tasks
Full experiment details for these tasks including task details and training protocol are in Appendix E.1.
# 4.1.1 Selective Copying
The Copying task is one of the most well-studied synthetic tasks for sequence modeling, originally designed to test the memorization abilities of recurrent models. As discussed in Section 3.1, LTI SSMs (linear recurrences and global convolutions) can easily solve this task by only keeping track of time instead of reasoning about the data; for example, by constructing a convolution kernel of exactly the right length (Figure 2). This was explicitly validated in earlier work on global convolutions (Romero et al. 2021). The Selective Copying task prevents this shortcut by randomizing the spacing between tokens. Note that this task has been introduced before as the Denoising task (Jing et al. 2019).
Note that many previous works argue that adding architecture gating (multiplicative interactions) can endow models with "data-dependence" and solve related tasks (Dao, Fu, Saab, et al. 2023; Poli et al. 2023). However, we find this explanation insufficient intuitively because such gating does not interact along the sequence axis, and cannot affect the spacing between tokens. In particular architecture gating is not an instance of a selection mechanism (Appendix A). Table 1 confirms that gated architectures such as H3 and Mamba only partially improve performance, while the selection mechanism (modifying S4 to S6) easily solves this task, particularly when combined with these more powerful architectures.
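For concreteness, the Copying and Selective Copying setups described above can be generated along the following lines. This is a toy sketch with arbitrary vocabulary size, copy length, and sequence length, not the exact protocol of Appendix E.1.

```python
# Toy generator for the (Selective) Copying task: place `n_copy` content tokens at either
# regular positions (Copying) or random positions (Selective Copying), pad with noise
# tokens, and append marker positions after which the model must output the content tokens.
import numpy as np

def make_copy_example(seq_len=64, n_copy=8, vocab=10, selective=True, seed=None):
    rng = np.random.default_rng(seed)
    content = rng.integers(1, vocab, size=n_copy)           # tokens 1..vocab-1; 0 = noise/pad
    if selective:
        pos = np.sort(rng.choice(seq_len, size=n_copy, replace=False))  # random spacing
    else:
        pos = np.arange(n_copy)                              # constant spacing at the start
    x = np.zeros(seq_len + n_copy, dtype=int)
    x[pos] = content
    x[seq_len:] = vocab                                      # `vocab` acts as the output marker
    y = content                                              # target: copy content in order
    return x, y

x, y = make_copy_example(selective=True, seed=0)
print(x.tolist(), "->", y.tolist())
```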
Table 1: (Selective Copying.) Accuracy for combinations of architectures and inner sequence layers.

Arch. | Layer | Acc.
No gate | S4 | 18.3
No gate | S6 | 97.0
H3 | S4 | 57.0
H3 | Hyena | 30.1
H3 | S6 | 99.7
Mamba | S4 | 56.4
Mamba | Hyena | 28.4
Mamba | S6 | 99.8

Table 2: (Induction Heads.) Models are trained on sequence length 2^8 = 256, and tested on increasing sequence lengths of 2^6 = 64 up to 2^20 = 1048576. Full numbers in Table 11. [Plot omitted: extrapolation accuracy vs. test sequence length for the attention variants (MHA-Absolute, MHA-RoPE, MHA-xPos) and SSM variants, with a random-guessing baseline.]

# 4.1.2 Induction Heads
Induction heads (Olsson et al. 2022) is a simple task from the mechanistic interpretability lens (Elhage et al. 2021) that is surprisingly predictive of the in-context learning ability of LLMs. It requires models to perform associative recall and copy: for example, if the model has seen a bigram such as "Harry Potter" in the sequence, then the next time "Harry" appears in the same sequence, the model should be able to predict "Potter" by copying from history.
Dataset. We train a 2-layer model on the induction heads task at sequence length 256, with a vocab size of 16, which is comparable to prior work on this task (Dao, Fu, Saab, et al. 2023) but with longer sequences. We additionally investigate generalization and extrapolation abilities by evaluating on a range of sequence lengths from 2^6 = 64 up to 2^20 = 1048576 at test time.
Models. Following established work on induction heads, we use 2 layer models, which allows attention to mechanistically solve the induction heads task (Olsson et al. 2022). We test both multi-head attention (8 heads, with various positional encodings) and SSM variants. We use a model dimension D of 64 for Mamba and 128 for the other models.
Results. Table 2 shows that Mamba, or more precisely its selective SSM layer, has the ability to solve the task perfectly because of its ability to selectively remember the relevant token while ignoring everything else in between. It generalizes perfectly to million-length sequences, or 4000× longer than it saw during training, while no other method goes beyond 2×. Out of positional encoding variants for attention models, xPos (which was designed for length extrapolation) is slightly better than the others; also note that all attention models were only tested up to sequence length 2^14 = 16384 due to memory limitations. Out of other SSMs, H3 and Hyena are similar, contrary to the findings in Poli et al. (2023).
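The task format can be illustrated with a toy generator (our own sketch; the cue-token convention and lengths are illustrative, not the exact protocol of Appendix E.1).

```python
# Toy induction-heads example: a random token sequence in which a designated "cue" token
# appears twice; the target is the token that followed its first occurrence.
import numpy as np

def make_induction_example(seq_len=256, vocab=16, seed=None):
    rng = np.random.default_rng(seed)
    special = vocab - 1                                   # reserve one token as the cue
    x = rng.integers(0, vocab - 1, size=seq_len)          # random "background" tokens
    i = rng.integers(1, seq_len // 2)                     # first occurrence of the cue
    x[i] = special
    target = x[i + 1]                                     # token to be recalled later
    x[-1] = special                                       # cue reappears at the end...
    return x, target                                      # ...and the model must predict `target`

x, y = make_induction_example(seed=0)
print(x[:8].tolist(), "...", x[-3:].tolist(), "-> predict", y)
```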
# 4.2 Language Modeling
We evaluate the Mamba architecture on standard autoregressive language modeling against other architectures, on both pretraining metrics (perplexity) and zero-shot evaluations. We set the model sizes (depth and width) to mirror GPT3 specifications. We use the Pile dataset (L. Gao, Biderman, et al. 2020), and follow the training recipe described in Brown et al. (2020). All training details are in Appendix E.2.
# 4.2.1 Scaling Laws
For baselines, we compare against the standard Transformer architecture (GPT3 architecture), as well as the strongest Transformer recipe we know of (here referred to as Transformer++), based on the PaLM and LLaMa architectures (e.g. rotary embedding, SwiGLU MLP, RMSNorm instead of LayerNorm, no linear bias, and higher learning rates). We also compare against other recent subquadratic architectures (Figure 4). All model details are in Appendix E.2.
[Figure 4 plots omitted: scaling laws on the Pile at sequence lengths 2048 and 8192, perplexity (log scale) vs. FLOPs (log scale) for Hyena, RWKV, Transformer, RetNet, H3++, Transformer++, and Mamba.]
Figure 4: (Scaling Laws.) Models of size ≈125M to ≈1.3B parameters, trained on the Pile. Mamba scales better than all other attention-free models and is the first to match the performance of a very strong "Transformer++" recipe that has now become standard, particularly as the sequence length grows.
Figure 4 shows scaling laws under the standard Chinchilla (Hoffmann et al. 2022) protocol, on models from ≈125M to ≈1.3B parameters. Mamba is the first attention-free model to match the performance of a very strong Transformer recipe (Transformer++) that has now become standard, particularly as the sequence length grows. We note that full results on context length 8k are missing for the RWKV and RetNet baselines, prior strong recurrent models that can also be interpreted as SSMs, due to a lack of efficient implementation leading to out-of-memory or unrealistic computation requirements.
# 4.2.2 Downstream Evaluations
Table 3 shows the performance of Mamba on a range of popular downstream zero-shot evaluation tasks. We compare against the most well-known open source models at these sizes, most importantly Pythia (Biderman et al. 2023) and RWKV (B. Peng et al. 2023) which were trained with the same tokenizer, dataset, and training length (300B tokens) as our models. (Note that Mamba and Pythia are trained with context length 2048, while RWKV was trained with context length 1024.)
# 4.3 DNA Modeling
Motivated by the success of large language models, there has been recent exploration into using the foundation model paradigm for genomics. DNA has been likened to language in that it consists of sequences of discrete tokens with a finite vocab. It is also known for requiring long-range dependencies to model (Avsec et al. 2021). We investigate Mamba as a FM backbone for pretraining and fine-tuning in the same setting as recent works on long-sequence models for DNA (Nguyen, Poli, et al. 2023). In particular, we focus on two explorations of scaling laws across model size and sequence length (Figure 5), and a difficult downstream synthetic classification task requiring long context (Figure 6).
For pretraining, we largely follow a standard causal language modeling (next token prediction) setup for the training and model details (see also Appendix E.2). For the dataset, we largely follow the setup of HyenaDNA (Nguyen, Poli, et al. 2023), which uses the HG38 dataset for pretraining consisting of a single human genome with about 4.5 billion tokens (DNA base pairs) in the training split.
# 4.3.1 Scaling: Model Size
In this experiment, we investigate the scaling properties of genomics foundation models with various model backbones (Figure 5 Left).
Training. To advantage the baselines, we train on a short sequence length of 1024; as shown in Section 4.3.2, we expect results to favor Mamba even more at longer sequence lengths. We fix a global batch size of 1024, for a total of 2^20 ≈ 1M tokens per batch. Models were trained for 10K gradient steps for a total of 10B tokens.

Table 3: (Zero-shot Evaluations.) Best results for each size in bold. We compare against open source LMs with various tokenizers, trained for up to 300B tokens. Pile refers to the validation split, comparing only against models trained on the same dataset and tokenizer (GPT-NeoX-20B). For each model size, Mamba is best-in-class on every single evaluation result, and generally matches baselines at twice the model size.
Model | Token. | Pile ppl ↓ | LAMBADA ppl ↓ | LAMBADA acc ↑ | HellaSwag acc ↑ | PIQA acc ↑ | Arc-E acc ↑ | Arc-C acc ↑ | WinoGrande acc ↑ | Average acc ↑
Hybrid H3-130M | GPT2 | - | 89.48 | 25.77 | 31.7 | 64.2 | 44.4 | 24.2 | 50.6 | 40.1
Pythia-160M | NeoX | 29.64 | 38.10 | 33.0 | 30.2 | 61.4 | 43.2 | 24.1 | 51.9 | 40.6
Mamba-130M | NeoX | 10.56 | 16.07 | 44.3 | 35.3 | 64.5 | 48.0 | 24.3 | 51.9 | 44.7
Hybrid H3-360M | GPT2 | - | 12.58 | 48.0 | 41.5 | 68.1 | 51.4 | 24.7 | 54.1 | 48.0
Pythia-410M | NeoX | 9.95 | 10.84 | 51.4 | 40.6 | 66.9 | 52.1 | 24.6 | 53.8 | 48.2
Mamba-370M | NeoX | 8.28 | 8.14 | 55.6 | 46.5 | 69.5 | 55.1 | 28.0 | 55.3 | 50.0
Pythia-1B | NeoX | 7.82 | 7.92 | 56.1 | 47.2 | 70.7 | 57.0 | 27.1 | 53.5 | 51.9
Mamba-790M | NeoX | 7.33 | 6.02 | 62.7 | 55.1 | 72.1 | 61.2 | 29.5 | 56.1 | 57.1
GPT-Neo 1.3B | GPT2 | - | 7.50 | 57.2 | 48.9 | 71.1 | 56.2 | 25.9 | 54.9 | 52.4
Hybrid H3-1.3B | GPT2 | - | 11.25 | 49.6 | 52.6 | 71.3 | 59.2 | 28.1 | 56.9 | 53.0
OPT-1.3B | OPT | - | 6.64 | 58.0 | 53.7 | 72.4 | 56.7 | 29.6 | 59.5 | 55.0
Pythia-1.4B | NeoX | 7.51 | 6.08 | 61.7 | 52.1 | 71.0 | 60.5 | 28.5 | 57.2 | 55.2
RWKV-1.5B | NeoX | 7.70 | 7.04 | 56.4 | 52.5 | 72.4 | 60.5 | 29.4 | 54.6 | 54.3
Mamba-1.4B | NeoX | 6.80 | 5.04 | 64.9 | 59.1 | 74.2 | 65.5 | 32.8 | 61.5 | 59.7
GPT-Neo 2.7B | GPT2 | - | 5.63 | 62.2 | 55.8 | 72.1 | 61.1 | 30.2 | 57.6 | 56.5
Hybrid H3-2.7B | GPT2 | - | 7.92 | 55.7 | 59.7 | 73.3 | 65.6 | 32.3 | 61.4 | 58.0
OPT-2.7B | OPT | - | 5.12 | 63.6 | 60.6 | 74.8 | 60.8 | 31.3 | 61.0 | 58.7
Pythia-2.8B | NeoX | 6.73 | 5.04 | 64.7 | 59.3 | 74.0 | 64.1 | 32.9 | 59.7 | 59.1
RWKV-3B | NeoX | 7.00 | 5.24 | 63.9 | 59.6 | 73.7 | 67.8 | 33.1 | 59.6 | 59.6
Mamba-2.8B | NeoX | 6.22 | 4.23 | 69.2 | 66.1 | 75.2 | 69.7 | 36.3 | 63.5 | 63.3
GPT-J-6B | GPT2 | - | 4.10 | 68.3 | 66.3 | 75.4 | 67.0 | 36.6 | 64.1 | 63.0
OPT-6.7B | OPT | - | 4.25 | 67.7 | 67.2 | 76.3 | 65.6 | 34.9 | 65.5 | 62.9
Pythia-6.9B | NeoX | 6.51 | 4.45 | 67.1 | 64.0 | 75.2 | 67.3 | 35.5 | 61.3 | 61.7
RWKV-7.4B | NeoX | 6.31 | 4.38 | 67.2 | 65.5 | 76.1 | 67.8 | 37.5 | 61.0 | 62.5

Results. Figure 5 (Left) shows that Mamba's pretraining perplexity improves smoothly with model size, and that Mamba scales better than both HyenaDNA and Transformer++. For example, at the largest model size of ≈40M parameters, the curve shows that Mamba can match the Transformer++ and HyenaDNA models with roughly 3× to 4× fewer parameters.
# 4.3.2 Scaling: Context Length
In the next DNA experiment, we investigate the scaling properties of models with respect to sequence length. We only compare the HyenaDNA and Mamba models, as quadratic attention becomes prohibitively expensive at longer sequence lengths. We pretrain models on sequence lengths 2^10 = 1024, 2^12 = 4096, 2^14 = 16384, 2^16 = 65536, 2^18 = 262144, and 2^20 = 1048576.
We fix a model size of 6 layers by width 128 (about 1.3M-1.4M parameters). Models were trained for 20K gradient steps for a total of ≈330B tokens. The longer sequence lengths used sequence length warmup similar to (Nguyen, Poli, et al. 2023).
Results. Figure 5 (Right) shows that Mamba is able to make use of longer context even up to extremely long sequences of length 1M, and its pretraining perplexity improves as the context increases. On the other hand, the HyenaDNA model gets worse with sequence length. This is intuitive from the discussion in Section 3.5 on properties of the selection mechanism. In particular, LTI models cannot selectively ignore information; from a convolutional perspective, a very long convolution kernel is aggregating all information across a long sequence, which may be very noisy. Note that while HyenaDNA claims to improve with longer context, their results do not control for computation time.
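The sequence length warmup mentioned above can be sketched as a simple staged schedule. The stage boundaries below are our own illustrative choices, not the actual schedule, which follows Nguyen, Poli, et al. (2023) and Appendix E.2.

```python
# Illustrative sequence-length warmup: start training at a short length and double it at
# fixed step milestones until the target length is reached, keeping tokens/batch constant.
def seqlen_schedule(step, target_len=1_048_576, start_len=1024, double_every=1000):
    length = start_len
    while length < target_len and step >= double_every:
        length *= 2
        step -= double_every
    return length

def batch_size(length, tokens_per_batch=2**20):
    return tokens_per_batch // length          # shrink the batch as the sequence grows

for s in [0, 1000, 5000, 10000]:
    L = seqlen_schedule(s)
    print(s, L, batch_size(L))
```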
[Figure 5 plots omitted: perplexity vs. parameters (left) and vs. sequence length (right) on HG38, for HyenaDNA 1.4M, Mamba 1.4M, Mamba 7M, and Transformer++.]
Figure 5: (DNA Scaling Laws.) Pretraining on the HG38 (human genome) dataset. (Left) Fixing short context length 2^10 = 1024 and increasing size from ≈200K to ≈40M parameters, Mamba scales better than baselines. (Right) Fixing model size and increasing sequence lengths while keeping tokens/batch and total training tokens fixed. Unlike baselines, the selection mechanism of Mamba facilitates better performance with increasing context length.
[Figure 6 and Figure 7 plots omitted: fine-tuning accuracy vs. sequence length on species DNA classification, and bits per byte vs. sequence length on YouTubeMix.]
Figure 6: (Great Apes DNA Classification.) Accuracy after fine-tuning on sequences of length 2^10 = 1024 up to 2^20 = 1048576 using pretrained models of the same context length. Numerical results in Table 13.
Figure 7: (Audio Pretraining.) Mamba improves performance over prior state-of-the-art (SaShiMi) in autoregressive audio modeling, while improving up to minute-long context or million-length sequences (controlling for computation).

# 4.3.3 Synthetic Species Classification
We evaluate models on a downstream task of classifying between 5 different species by randomly sampling a contiguous segment of their DNA. This task is adapted from HyenaDNA, which used the species {human, lemur, mouse, pig, hippo}. We modify the task to be significantly more challenging by classifying between the five great apes species {human, chimpanzee, gorilla, orangutan, bonobo}, which are known to share 99% of their DNA.
# 4.4 Audio Modeling and Generation
For the audio waveform modality, we compare primarily to the SaShiMi architecture and training protocols (Goel et al. 2022). This model comprises (1) a U-Net backbone with two stages of pooling by a factor p that doubles the model dimension D per stage, and (2) alternating S4 and MLP blocks in each stage. We consider replacing the S4+MLP blocks with Mamba blocks. Experiment details are in Appendix E.4.
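A schematic of this backbone is sketched below. This is our own illustration: pooling is implemented as reshape-plus-projection, skip connections are omitted, and the sequence blocks are placeholders; it is not the SaShiMi or Mamba reference code.

```python
# SaShiMi-style U-Net backbone sketch: two pooling stages by factor p, each doubling the
# model dimension D, with a stack of residual sequence blocks (S4+MLP or Mamba) per stage.
import torch
import torch.nn as nn

class Pool(nn.Module):
    """Downsample the sequence by p and double the model dimension."""
    def __init__(self, d: int, p: int):
        super().__init__()
        self.p, self.proj = p, nn.Linear(d * p, 2 * d)
    def forward(self, x):                                   # (B, L, D) -> (B, L/p, 2D)
        b, L, d = x.shape
        return self.proj(x.reshape(b, L // self.p, d * self.p))

class Unpool(nn.Module):
    """Inverse of Pool: upsample by p and halve the model dimension (d = dim after unpooling)."""
    def __init__(self, d: int, p: int):
        super().__init__()
        self.p, self.proj = p, nn.Linear(2 * d, d * p)
    def forward(self, x):                                   # (B, L, 2D) -> (B, L*p, D)
        b, L, _ = x.shape
        return self.proj(x).reshape(b, L * self.p, -1)

def stage(d, n, block):                                     # a stack of sequence blocks
    return nn.ModuleList(block(d) for _ in range(n))

class UNetBackbone(nn.Module):
    def __init__(self, d=64, p=4, n=2, block=lambda d: nn.Identity()):
        super().__init__()
        self.outer, self.pool1 = stage(d, n, block), Pool(d, p)
        self.middle, self.pool2 = stage(2 * d, n, block), Pool(2 * d, p)
        self.center = stage(4 * d, n, block)
        self.unpool2, self.unpool1 = Unpool(2 * d, p), Unpool(d, p)

    def forward(self, x):                                   # x: (B, L, D)
        for blk in self.outer:  x = x + blk(x)              # residual blocks at full resolution
        x = self.pool1(x)
        for blk in self.middle: x = x + blk(x)              # length L/p, dim 2D
        x = self.pool2(x)
        for blk in self.center: x = x + blk(x)              # length L/p^2, dim 4D
        return self.unpool1(self.unpool2(x))                # back to (B, L, D); skips omitted

print(UNetBackbone()(torch.randn(2, 1600, 64)).shape)       # torch.Size([2, 1600, 64])
```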
# 4.4.1 Long-Context Autoregressive Pretraining
We evaluate pretraining quality (autoregressive next-sample prediction) on YouTubeMix (DeepSound 2017), a standard piano music dataset used by prior work consisting of 4 hours of solo piano music, sampled at a rate of 16000 Hz. Pretraining details largely follow the standard language modeling setup (Section 4.2). Figure 7 evaluates the effect of increasing training sequence lengths from 2^13 = 8192 to 2^20 ≈ 10^6, while keeping computation fixed. (There are some slight edge cases to the way the data is curated, which may lead to kinks in the scaling curves. For example, only minute-long clips were available so the maximum sequence length is actually bounded by 60 s × 16000 Hz = 960000.)
Both Mamba and the SaShiMi (S4+MLP) baseline improve consistently with longer context lengths; Mamba is better throughout, and the gap widens at longer lengths. The main metric is bits per byte (BPB), which is a constant factor log(2) of the standard negative log-likelihood (NLL) loss for pretraining other modalities. We note one important detail: this is the only experiment in this paper in which we switched from the real parameterization to complex (Section 3.6). We show additional ablations in Appendix E.4.
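The BPB/NLL relation mentioned above is just a change of logarithm base, shown here for clarity (the example value is illustrative).

```python
# Bits per byte (BPB) vs. natural-log NLL for byte-level models: BPB = NLL / ln(2),
# since one nat equals 1/ln(2) bits.
import math
nll_nats = 1.28                      # example NLL in nats per byte (illustrative value)
bpb = nll_nats / math.log(2)
print(f"{nll_nats} nats/byte = {bpb:.3f} bits/byte")
```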
# 4.4.2 Autoregressive Speech Generation
SC09 is a benchmark speech generation dataset (Donahue, McAuley, and Puckette 2019; Warden 2018), consisting of 1-second clips sampled at 16000 Hz of the digits "zero" through "nine" with highly variable characteristics. We largely follow the autoregressive training setup and generation protocol of Goel et al. (2022).
Table 4 shows automated metrics of the Mamba-UNet model compared to a variety of baselines from Goel et al. (2022): WaveNet (Oord et al. 2016), SampleRNN (Mehri et al. 2017), WaveGAN (Donahue, McAuley, and Puckette 2019), DiffWave (Z. Kong et al. 2021), and SaShiMi. A small Mamba model outperforms the state-of-the-art (and much larger) GAN- and diffusion-based models. A larger model parameter-matched to the baselines further improves on fidelity metrics dramatically.
Table 5 takes the small Mamba model and investigates combinations of different architectures for the outer stages and center stage. It shows that Mamba is consistently better than S4+MLP in the outer blocks, and Mamba > S4+MLP > MHA+MLP in the center blocks.
Table 4: (SC09) Automated metrics for unconditional generation on a challenging dataset of fixed-length speech clips. (Top to Bottom) Autoregressive baselines, non-autoregressive baselines, Mamba, and dataset metrics.
Table 5: (SC09 Model Ablations) Models with 6M parameters. In SaShiMi's U-Net backbone, there are 8 center blocks operating on sequence length 1000, sandwiched on each side by 8 outer blocks on sequence length 4000, sandwiched by 8 outer blocks on sequence length 16000 (40 blocks total). The architecture of the 8 center blocks are ablated independently of the rest. Note that Transformers (MHA+MLP) were not tested in the more important outer blocks because of efficiency constraints.

Table 4:
Model | Params | NLL ↓ | FID ↓ | IS ↑ | mIS ↑ | AM ↓
SampleRNN | 35.0M | 2.042 | 8.96 | 1.71 | 3.02 | 1.76
WaveNet | 4.2M | 1.925 | 5.08 | 2.27 | 5.80 | 1.47
SaShiMi | 5.8M | 1.873 | 1.99 | 5.13 | 42.57 | 0.74
WaveGAN | 19.1M | - | 2.03 | 4.90 | 36.10 | 0.80
DiffWave | 24.1M | - | 1.92 | 5.26 | 51.21 | 0.68
DiffWave + SaShiMi | 23.0M | - | 1.42 | 5.94 | 69.17 | 0.59
Mamba | 6.1M | 1.852 | 0.94 | 6.26 | 88.54 | 0.52
Mamba | 24.3M | 1.860 | 0.67 | 7.33 | 144.9 | 0.36
Train | - | - | 0.00 | 8.56 | 292.5 | 0.16
Test | - | - | 0.02 | 8.33 | 257.6 | 0.19
Table 5:
Outer | Center | NLL ↓ | FID ↓ | IS ↑ | mIS ↑ | AM ↓
S4+MLP | MHA+MLP | 1.859 | 1.45 | 5.06 | 47.03 | 0.70
S4+MLP | S4+MLP | 1.867 | 1.43 | 5.42 | 53.54 | 0.65
S4+MLP | Mamba | 1.859 | 1.42 | 5.71 | 56.51 | 0.64
Mamba | MHA+MLP | 1.850 | 1.37 | 5.63 | 58.23 | 0.62
Mamba | S4+MLP | 1.853 | 1.07 | 6.05 | 73.34 | 0.55
Mamba | Mamba | 1.852 | 0.94 | 6.26 | 88.54 | 0.52

# 4.5 Speed and Memory Benchmarks
We benchmark the speed of the SSM scan operation (state expansion N = 16), as well as the end-to-end inference throughput of Mamba, in Figure 8. Our efficient SSM scan is faster than the best attention implementation that we know of (FlashAttention-2 (Dao 2023)) beyond sequence length 2K, and up to 20-40× faster than a standard scan implementation in PyTorch. Mamba achieves 4-5× higher inference throughput than a Transformer of similar size, since without the KV cache it can use much higher batch sizes. For example, a Mamba-6.9B (untrained) would have higher inference throughput than a 5× smaller Transformer-1.3B. Details in Appendix E.5, which additionally includes a benchmark of memory consumption.

[Figure 8 plots omitted: scan vs. convolution vs. attention time on an A100 80GB PCIe across sequence lengths (left), and inference throughput on A100 80GB at prompt length 2048 for Mamba-6.9B and Transformer-6.7B across batch sizes (right).]
Figure 8: (Efficiency Benchmarks.) (Left) Training: our efficient scan is 40× faster than a standard implementation. (Right) Inference: as a recurrent model, Mamba can achieve 5× higher throughput than Transformers.

# 4.6 Model Ablations
We perform a series of detailed ablations on components of our model, focusing on the setting of language modeling with size ≈350M models at Chinchilla token counts (same setting as Figure 4).
# 4.6.1 Architecture
Table 6 investigates the effects of the architecture (block) and its inner SSM layer (Figure 3). We find that
• Among previous non-selective (LTI) SSMs, which are equivalent to global convolutions, performance is very similar.
• Replacing the complex-valued S4 variant from previous work with a real-valued one does not affect performance much, suggesting that (at least for LM) real-valued SSMs may be a better choice when accounting for hardware efficiency.
• Replacing any of these with a selective SSM (S6) significantly improves performance, validating the motivation of Section 3.
• The Mamba architecture performs similarly to the H3 architecture (and seems slightly better when using a selective layer).
We also investigate interleaving the Mamba block with other blocks such as MLP (a traditional architecture) and MHA (a hybrid attention architecture) in Appendix E.2.2.
# 4.6.2 Selective SSM
Table 7 ablates the selective SSM layer by considering different combinations of selective Δ, B, and C parameters (Algorithm 2), showing that Δ is the most important parameter due to its connection to RNN gating (Theorem 1).
Table 8 considers different initializations of the SSM, which have been shown to make a large difference in some data modalities and settings (Gu, Goel, and Ré 2022; Gu, Gupta, et al. 2022). On language modeling, we find that simpler real-valued diagonal initializations (S4D-Real, row 3) instead of more standard complex-valued parameterizations (S4D-Lin, row 1) perform better. Random initializations also work well, consistent with findings from prior work (Mehta et al. 2023).
Table 9 and Table 10 consider varying the dimension of the Δ and (B, C) projections respectively. Changing them from static to selective provides the most benefit, while increasing the dimensions further generally improves performance modestly with a small increase in parameter count.
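The parameter overhead behind these ablations can be read off the Params columns of Tables 9 and 10; a quick arithmetic check:

```python
# Relative parameter overhead of the selective projections, from Tables 9 and 10 (in millions).
delta_proj = (371.5 - 358.9) / 358.9     # Δ-projection dim 64 vs. no projection
state_dim  = (371.5 - 367.1) / 367.1     # SSM state dimension N = 16 vs. N = 1
print(f"Δ projection 64: +{delta_proj:.1%} params; state size N=16: +{state_dim:.1%} params")
# -> roughly +3.5% and +1.2%, i.e. the state-size increase costs on the order of 1% of parameters.
```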
Of particular note is the dramatic improvement of the selective SSM when the state size ð is increased, with over a 1.0 perplexity improvement for a cost of only 1% additional parameters. This validates our core motivation in Sections 3.1 and 3.3. 16 Table 6: (Ablations: Architecture and SSM layer.) The Mamba block performs similarly to H3 while being simpler. In the inner layer, there is little difference among different parameterizations of LTI models, while selective SSMs (S6) provide a large improvement. More specifically, the S4 (real) variant is S4D-Real and the S4 (complex) variant is S4D-Lin. Model Arch. SSM Layer Perplexity Model Arch. SSM Layer Perplexity Hyena H3 H3 H3 H3 - H3 - Hyena S4 (complex) S4 (real) S6 10.24 10.30 10.34 8.95 Mamba Hyena - Mamba - - Mamba Mamba Mamba S4 (complex) S4 (real) S6 10.75 10.54 10.56 8.69 Table 7: (Ablations: Selective parameters.) â is the most im- portant parameter (Theorem 1), but using multiple selective pa- rameters together synergizes. Table 8: (Ablations: Parameterization of A.) The more standard initializations based on S4D-Lin (Gu, Gupta, et al. 2022) perform worse than S4D-Real or a random initializa- tion, when the SSM is selective. Selective A Selective B SelectiveC Perplexity \Qx& xX Qk *®QX Qk Q&X 1093 10.15 9.98 9.81 8.71
A_n Initialization | Field | Perplexity
A_n = -1/2 + ni | Complex | 9.16
A_n = -1/2 | Real | 8.85
A_n = -(n + 1) | Real | 8.71
A_n ~ exp(N(0, 1)) | Real | 8.71

Table 9: (Ablations: Expressivity of Δ.) The selection mechanism of Δ constructs it with a projection of the input. Projecting it even to dim. 1 provides a large increase in performance; increasing it further provides further improvements at the cost of a modest increase in parameters. State size fixed to N = 16.

Size of Δ proj. | Params (M) | Perplexity
- | 358.9 | 9.12
1 | 359.1 | 8.97
2 | 359.3 | 8.97
4 | 359.7 | 8.91
8 | 360.5 | 8.83
16 | 362.1 | 8.84
32 | 365.2 | 8.80
64 | 371.5 | 8.71

Table 10: (Ablations: SSM state dimension.) (Top) Constant B and C. (Bottom) Selective B and C. Increasing the SSM state dimension N, which can be viewed as an expansion factor on the dimension of the recurrent state, can significantly improve performance for a negligible cost in parameters/FLOPs, but only when B and C are also selective. Size of Δ projection fixed to 64.

State dimension N | Params (M) | Perplexity
(Constant B and C)
1 | 367.1 | 9.88
2 | 367.4 | 9.86
4 | 368.0 | 9.82
8 | 369.1 | 9.82
16 | 371.5 | 9.81
(Selective B and C)
1 | 367.1 | 9.73
2 | 367.4 | 9.40
4 | 368.0 | 9.09
8 | 369.1 | 8.84
16 | 371.5 | 8.71

# 5 Discussion
We discuss related work, limitations, and some future directions.
Related Work. Appendix A discusses how the selection mechanism relates to similar concepts. Appendix B has an extended related work of SSMs and other related models.
No Free Lunch: Continuous-Discrete Spectrum. Structured SSMs were originally defined as discretizations of continuous systems (1), and have had a strong inductive bias toward continuous-time data modalities such as perceptual signals (e.g. audio, video). As discussed in Sections 3.1 and 3.5, the selection mechanism overcomes their weaknesses on discrete modalities such as text and DNA; but this conversely can impede their performance on data that LTI SSMs excel on. Our ablations on audio waveforms examine this tradeoff in more detail.
Downstream Affordances. Transformer-based foundation models (particularly LLMs) have a rich ecosystem of properties and modes of interaction with pretrained models, such as fine-tuning, adaptation, prompting, in-context learning, instruction tuning, RLHF, quantization, and so on. We are particularly interested in whether Transformer alternatives such as SSMs have similar properties and affordances.
Scaling. Our empirical evaluation is limited to small model sizes, below the threshold of most strong open source LLMs (e.g. Llama (Touvron et al. 2023)) as well as other recurrent models such as RWKV (B. Peng et al. 2023) and RetNet (Y. Sun et al. 2023), which have been evaluated at the 7B parameter scale and beyond. It remains to assess whether Mamba still compares favorably at these larger sizes. We also note that scaling SSMs may involve further engineering challenges and adjustments to the model that are not discussed in this paper.
# 6 Conclusion
We introduce a selection mechanism to structured state space models, allowing them to perform context-dependent reasoning while scaling linearly in sequence length. When incorporated into a simple attention-free architecture, Mamba achieves state-of-the-art results on a diverse set of domains, where it matches or exceeds the performance of strong Transformer models. We are excited about the broad applications of selective state space models to build foundation models for different domains, especially in emerging modalities requiring long context such as genomics, audio, and video. Our results suggest that Mamba is a strong candidate to be a general sequence model backbone.
# Acknowledgments
We thank Karan Goel, Arjun Desai, and Kush Bhatia for helpful feedback on the draft.
# References
[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. "Unitary Evolution Recurrent Neural Networks". In: The International Conference on Machine Learning (ICML). 2016, pp. 1120-1128.
[2] Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R Ledsam, Agnieszka Grabska-Barwinska, Kyle R Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, and David R Kelley. "Effective Gene Expression Prediction from Sequence by Integrating Long-range Interactions". In: Nature Methods 18.10 (2021), pp. 1196-1203.
[3] Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. "Using Fast Weights to Attend to the Recent Past". In: Advances in Neural Information Processing Systems (NeurIPS) 29 (2016).
[4] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. "Layer Normalization". In: arXiv preprint arXiv:1607.06450 (2016).
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate". In: The International Conference on Learning Representations (ICLR). 2015.
[6] David Balduzzi and Muhammad Ghifary. "Strongly-typed Recurrent Neural Networks". In: International Conference on Machine Learning. PMLR. 2016, pp. 1292-1300.
[7] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle OBrien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. â Pythia: A Suite for Analyzing Large Language Models across Training and Scalingâ . In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 2397â 2430. [8] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. â PIQA: Reasoning about Physical Commonsense in Natural Languageâ . In: Proceedings of the AAAI conference on Artificial Intelligence. Vol. 34. 05. 2020, pp. 7432â 7439. [9] Guy E Blelloch. â Prefix Sums and Their Applicationsâ . In: (1990). [10] James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. â
Quasi-recurrent Neural Networksâ . In: arXiv preprint arXiv:1611.01576 (2016). 18 [11] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Nee- lakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. â Language Models are Few-shot Learnersâ . In: Advances in Neural Information Processing Systems (NeurIPS) 33 (2020), pp. 1877â 1901.
[12] Aydar Bulatov, Yuri Kuratov, and Mikhail S Burtsev. â Scaling Transformer to 1M tokens and Beyond with RMTâ . In: arXiv preprint arXiv:2304.11062 (2023). [13] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. â Generating Long Sequences with Sparse Trans- formersâ . In: arXiv preprint arXiv:1904.10509 (2019). [14] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Pe- ter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. â Rethinking Attention with Performersâ . In: The International Conference on Learning Representations (ICLR). 2021. [15] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. â PaLM: Scaling Language Modeling with Pathwaysâ . In: Journal of Machine Learning Research 24.240 (2023), pp. 1â 113. url: http://jmlr.org/ papers/v24/22-1144.html. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. â Empirical Evaluation of Gated Re- current Neural Networks on Sequence Modelingâ
. In: arXiv preprint arXiv:1412.3555 (2014). [17] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. â Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challengeâ . In: arXiv preprint arXiv:1803.05457 (2018). [18] Tri Dao. â FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioningâ . In: (2023). [19] Tri Dao, Daniel Y Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. â FlashAttention: Fast and Memory- Efficient Exact Attention with IO-Awarenessâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2022. [20] Tri Dao, Daniel Y Fu, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. â
Hungry Hungry Hippos: Towards Language Modeling with State Space Modelsâ . In: The International Conference on Learning Representations (ICLR). 2023. [21] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. â Language Modeling with Gated Convolu- tional Networksâ . In: The International Conference on Machine Learning (ICML). PMLR. 2017, pp. 933â 941. # [22] DeepSound. SampleRNN. https://github.com/deepsound-project/samplernn-pytorch. 2017. [23] Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, and Furu Wei. â LongNet:
Scaling Transformers to 1,000,000,000 Tokensâ . In: arXiv preprint arXiv:2307.02486 (2023). [24] Chris Donahue, Julian McAuley, and Miller Puckette. â Adversarial Audio Synthesisâ . In: The International Conference on Learning Representations (ICLR). 2019. [25] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. â An Image is Worth 16x16 Words: Transformers for Image Recognition at Scaleâ . In: The International Conference on Learning Representations (ICLR). 2020. [26] Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. â
A Mathematical Framework for Transformer Circuitsâ . In: Transformer Circuits Thread (2021). https://transformer-circuits.pub/2021/framework/index.html. [27] Mahan Fathi, Jonathan Pilault, Pierre-Luc Bacon, Christopher Pal, Orhan Firat, and Ross Goroshin. â Block- State Transformerâ . In: arXiv preprint arXiv:2306.09539 (2023). [28] Yassir Fathullah, Chunyang Wu, Yuan Shangguan, Junteng Jia, Wenhan Xiong, Jay Mahadeokar, Chunxi Liu, Yangyang Shi, Ozlem Kalinli, Mike Seltzer, et al. â Multi-Head State Space Model for Sequence Modelingâ . In: INTERSPEECH. 2023. [29] Karl J Friston, Lee Harrison, and Will Penny. â Dynamic Causal Modellingâ . In: Neuroimage 19.4 (2003), pp. 1273â 1302. [30] Daniel Y Fu, Elliot L Epstein, Eric Nguyen, Armin W Thomas, Michael Zhang, Tri Dao, Atri Rudra, and Christo- pher Ré. â
Simple Hardware-efficient Long Convolutions for Sequence Modelingâ . In: The International Confer- ence on Machine Learning (ICML) (2023). [31] Ken-ichi Funahashi and Yuichi Nakamura. â Approximation of Dynamical Systems by Continuous Time Recur- rent Neural Networksâ . In: Neural Networks 6.6 (1993), pp. 801â 806. 19 [32] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. â
The Pile: An 800GB Dataset of Diverse Text for Language Modelingâ . In: arXiv preprint arXiv:2101.00027 (2020). [33] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A Framework for Few-shot Language Model Evaluation.
Version v0.0.1. Sept. 2021. doi: 10.5281/zenodo.5371628. url: https://doi.org/10.5281/zenodo.5371628. [34] Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. â Itâ s Raw! Audio Generation with State-Space Modelsâ . In: The International Conference on Machine Learning (ICML). 2022. [35] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. â
HIPPO: Recurrent Memory with Optimal Polynomial Projectionsâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2020. [36] Albert Gu, Karan Goel, and Christopher Ré. â Efficiently Modeling Long Sequences with Structured State Spacesâ . In: The International Conference on Learning Representations (ICLR). 2022. [37] Albert Gu, Caglar Gulcehre, Tom Le Paine, Matt Hoffman, and Razvan Pascanu. â Improving the Gating Mech- anism of Recurrent Neural Networksâ . In: The International Conference on Machine Learning (ICML). 2020. [38] Albert Gu, Ankit Gupta, Karan Goel, and Christopher Ré. â On the Parameterization and Initialization of Diag- onal State Space Modelsâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2022. [39] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. â Combining Recur- rent, Convolutional, and Continuous-time Models with the Linear State Space Layerâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2021. [40] Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, and Christopher Ré. â
How to Train Your HIPPO: State Space Models with Generalized Basis Projectionsâ . In: The International Conference on Learning Representations (ICLR). 2023. [41] Ankit Gupta, Albert Gu, and Jonathan Berant. â Diagonal State Spaces are as Effective as Structured State Spacesâ . In: Advances in Neural Information Processing Systems 35 (2022), pp. 22982â 22994. [42] David Ha, Andrew Dai, and Quoc V. Le. â HyperNetworksâ . In: The International Conference on Learning Rep- resentations (ICLR). 2017. [43] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. â Dream to Control: Learning Behav- iors by Latent Imaginationâ
. In: The International Conference on Learning Representations (ICLR). 2020. [44] Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. â Liquid Structural State-Space Modelsâ . In: The International Conference on Learning Representations (ICLR). 2023. [45] Mikael Henaff, Arthur Szlam, and Yann LeCun. â Recurrent Orthogonal Networks and Long-Memory Tasksâ
. In: The International Conference on Machine Learning (ICML). 2016. [46] Dan Hendrycks and Kevin Gimpel. "Gaussian Error Linear Units (GELUs)". In: arXiv preprint arXiv:1606.08415 (2016). [47] Sepp Hochreiter and Jürgen Schmidhuber. "Long Short-Term Memory". In: Neural Computation 9.8 (1997), pp. 1735–1780. [48] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. "
An Empirical Analysis of Compute-Optimal Large Language Model Training". In: Advances in Neural Information Processing Systems (NeurIPS) 35 (2022), pp. 30016–30030. [49] Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le. "Transformer Quality in Linear Time". In: The International Conference on Machine Learning (ICML). PMLR. 2022, pp. 9099–
9117. [50] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. â Deep Learning for Time Series Classification: A Reviewâ . In: Data Mining and Knowledge Discovery 33.4 (2019), pp. 917â 963. [51] Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, and Torsten Hoefler. â Data Movement is All You Need: A Case Study on Optimizing Transformersâ . In: Proceedings of Machine Learning and Systems 3 (2021), pp. 711â 732. [52] Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. â Gated Orthogonal Recurrent Units: On Learning to Forgetâ . In: Neural Computation 31.4 (2019), pp. 765â 783. [53] Rudolph Emil Kalman. â A New Approach to Linear Filtering and Prediction Problemsâ . In: (1960). 20 [54] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. â
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attentionâ . In: International Conference on Machine Learning. PMLR. 2020, pp. 5156â 5165. [55] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. â DiffWave: A Versatile Diffusion Model for Audio Synthesisâ . In: International Conference on Learning Representations. 2021. [56] Chrysoula Kosma, Giannis Nikolentzos, and Michalis Vazirgiannis. â Time-Parameterized Convolutional Neu- ral Networks for Irregularly Sampled Time Seriesâ . In: arXiv preprint arXiv:2308.03210 (2023). [57] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. â ImageNet Classification with Deep Convolutional Neural Networksâ
. In: Advances in Neural Information Processing Systems (NeurIPS) 25 (2012). [58] Tao Lei. â When Attention Meets Fast Recurrence: Training Language Models with Reduced Computeâ . In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021, pp. 7633â 7648. [59] Tao Lei, Yu Zhang, Sida I Wang, Hui Dai, and Yoav Artzi. â Simple Recurrent Units for Highly Parallelizable Recurrenceâ . In: arXiv preprint arXiv:1709.02755 (2017). [60] Mario Lezcano-Casado and David Martà nez-Rubio. â
Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Groupâ . In: The International Conference on Machine Learning (ICML). 2019. [61] Yuhong Li, Tianle Cai, Yi Zhang, Deming Chen, and Debadeepta Dey. â What Makes Convolutional Models Great on Long Sequence Modeling?â In: The International Conference on Learning Representations (ICLR). 2023. [62] Vasileios Lioutas and Yuhong Guo. â Time-aware Large Kernel Convolutionsâ
. In: The International Conference on Machine Learning (ICML). PMLR. 2020, pp. 6172â 6183. [63] Chris Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, and Feryal Behba- hani. â Structured State Space Models for In-Context Reinforcement Learningâ . In: Advances in Neural Informa- tion Processing Systems (NeurIPS). 2023.
[64] Shahar Lutati, Itamar Zimerman, and Lior Wolf. â Focus Your Attention (with Adaptive IIR Filters)â . In: arXiv preprint arXiv:2305.14952 (2023). [65] Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. â Mega: Moving Average Equipped Gated Attentionâ . In: The International Conference on Learning Representations (ICLR). 2023. [66] Eric Martin and Chris Cundy. â Parallelizing Linear Recurrent Neural Nets Over Sequence Lengthâ . In: The International Conference on Learning Representations (ICLR). 2018. [67] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. â SampleRNN: An Unconditional End-to-End Neural Audio Generation Modelâ
. In: The International Conference on Learning Representations (ICLR). 2017. [68] Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. â Long Range Language Modeling via Gated State Spacesâ . In: The International Conference on Learning Representations (ICLR). 2023. [69] Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. â Efficient Orthogonal Parametri- sation of Recurrent Neural Networks using Householder Reflectionsâ . In: International Conference on Machine Learning. PMLR. 2017, pp. 2401â 2409. [70] Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, and Christopher Ré. â S4ND: Modeling Images and Videos as Multidimensional Signals with State Spacesâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2022. [71] Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Pa- tel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, et al. â HyenaDNA: Long-range Genomic Sequence Modeling at Single Nucleotide Resolutionâ . In: Advances in Neural Information Processing Systems (NeurIPS). 2023. [72] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. â In-context Learning and Induction Headsâ . In: Transformer Circuits Thread (2022). https://transformer-circuits.pub/2022/in-context-learning-and-induction- heads/index.html.
[73] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalch- brenner, Andrew Senior, and Koray Kavukcuoglu. â WaveNet: A Generative Model for Raw Audioâ . In: arXiv preprint arXiv:1609.03499 (2016). 21 [74] Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and So- ham De. â
Resurrecting Recurrent Neural Networks for Long Sequencesâ . In: The International Conference on Machine Learning (ICML). 2023. [75] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. â The LAMBADA Dataset: Word Prediction Requiring a Broad Discourse Contextâ
. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016, pp. 1525â 1534. [76] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. â On the Difficulty of Training Recurrent Neural Net- worksâ . In: International Conference on Machine Learning. 2013, pp. 1310â 1318. [77] Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, et al. â RWKV:
Reinventing RNNs for the Transformer Eraâ . In: arXiv preprint arXiv:2305.13048 (2023). [78] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong. â Random Feature Attentionâ . In: The International Conference on Learning Representations (ICLR). 2021. [79] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. â
Hyena Hierarchy: Towards Larger Convolutional Language Modelsâ . In: The International Conference on Machine Learning (ICML). 2023. [80] Zhen Qin, Xiaodong Han, Weixuan Sun, Bowen He, Dong Li, Dongxu Li, Yuchao Dai, Lingpeng Kong, and Yiran Zhong. â Toeplitz Neural Network for Sequence Modelingâ . In: The International Conference on Learning Representations (ICLR). 2023. [81] Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. â
The devil in linear transformerâ . In: arXiv preprint arXiv:2210.10340 (2022). [82] Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, and Yiran Zhong. â CosFormer: Rethinking Softmax in Attentionâ . In: The International Conference on Learning Representations (ICLR). 2022. [83] Ali Rahimi and Benjamin Recht. â Random features for large-scale kernel machinesâ . In: Advances in neural information processing systems 20 (2007). [84] Prajit Ramachandran, Barret Zoph, and Quoc V Le. â Swish: A Self-gated Activation Functionâ . In: arXiv preprint arXiv:1710.05941 7.1 (2017), p. 5. [85] David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. â CKConv: Con- tinuous Kernel Convolution For Sequential Dataâ . In: arXiv preprint arXiv:2102.02611 (2021). [86] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. â
Winogrande: An Adversarial Winograd Schema Challenge at Scale". In: Communications of the ACM 64.9 (2021), pp. 99–106. [87] George Saon, Ankit Gupta, and Xiaodong Cui. "Diagonal State Space Augmented Transformers for Speech Recognition". In: ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. 2023, pp. 1–5. [88] Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. "Linear Transformers are Secretly Fast Weight Programmers". In: The International Conference on Machine Learning (ICML). PMLR. 2021, pp. 9355–9366. [89] Noam Shazeer. "GLU Variants Improve Transformer". In: arXiv preprint arXiv:2002.05202 (2020). [90] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. "Large Language Models can be Easily Distracted by Irrelevant Context". In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 31210–31227.
[91] Jiaxin Shi, Ke Alexander Wang, and Emily Fox. "Sequence Modeling with Multiresolution Convolutional Memory". In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 31312–31327. [92] Jimmy TH Smith, Andrew Warrington, and Scott W Linderman. "Simplified State Space Layers for Sequence Modeling". In: The International Conference on Learning Representations (ICLR). 2023. [93] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. "
Roformer: Enhanced Transformer with Rotary Position Embedding". In: arXiv preprint arXiv:2104.09864 (2021). [94] Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. "Retentive network: A successor to transformer for large language models". In: arXiv preprint arXiv:2307.08621 (2023). [95] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. "Sequence to Sequence Learning with Neural Networks"
. In: Advances in Neural Information Processing Systems (NeurIPS) 27 (2014). 22 [96] Corentin Tallec and Yann Ollivier. â Can Recurrent Neural Networks Warp Time?â In: The International Con- ference on Learning Representations (ICLR). 2018. [97] Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Se- bastian Ruder, and Donald Metzler. â Long Range Arena: A Benchmark for Efficient Transformersâ
. In: International Conference on Learning Representations (ICLR). 2021. [98] Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. "Efficient Transformers: A Survey". In: ACM Computing Surveys 55.6 (2022), pp. 1–28. [99] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. "Llama: Open and Efficient Foundation Language Models". In: arXiv preprint arXiv:2302.13971 (2023). [100] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need". In: Advances in Neural Information Processing Systems (NeurIPS). 2017. [101] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. "On Orthogonality and Learning Recurrent Networks with Long Term Dependencies". In: International Conference on Machine Learning. PMLR. 2017, pp. 3570–3578. [102] Jue Wang, Wentao Zhu, Pichao Wang, Xiang Yu, Linda Liu, Mohamed Omar, and Raffay Hamid. "
Selective Structured State-Spaces for Long-form Video Understanding". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023, pp. 6387–6397. [103] Pete Warden. "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition". In: ArXiv abs/1804.03209 (2018). [104] Samuel Williams, Andrew Waterman, and David Patterson. "Roofline: An Insightful Visual Performance Model for Multicore Architectures". In: Communications of the ACM 52.4 (2009), pp. 65–76. [105] Brandon Yang, Gabriel Bender, Quoc V Le, and Jiquan Ngiam. "CondConv: Conditionally Parameterized Convolutions for Efficient Inference". In: Advances in Neural Information Processing Systems (NeurIPS) 32 (2019). [106] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. "HellaSwag: Can a Machine Really Finish Your Sentence?" In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [107] Shuangfei Zhai, Walter Talbott, Nitish Srivastava, Chen Huang, Hanlin Goh, Ruixiang Zhang, and Josh Susskind. "An Attention Free Transformer". In: arXiv preprint arXiv:2105.14103 (2021). [108] Michael Zhang, Khaled K Saab, Michael Poli, Tri Dao, Karan Goel, and Christopher Ré. "Effectively Modeling Time Series with Simple Discrete State Spaces". In: The International Conference on Learning Representations (ICLR). 2023. [109] Lin Zheng, Chong Wang, and Lingpeng Kong. "Linear complexity randomized self-attention mechanism". In: International Conference on Machine Learning. PMLR. 2022, pp. 27011–27041. [110] Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Charles, Eren Manavoglu, Tuo Zhao, and Jianfeng Gao. "Efficient Long Sequence Modeling via State Space Augmented Transformer".
In: arXiv preprint arXiv:2212.08136 (2022).

# A Discussion: Selection Mechanism

Our selection mechanism is inspired by and related to concepts such as gating, hypernetworks, and data-dependence. It can also be viewed as related to "fast weights" (J. Ba et al. 2016), which connects classical RNNs with the mechanism of linear attention (Schlag, Irie, and Schmidhuber 2021). However, we believe that it is a distinct concept that is worth clarifying.

Gating. Gating originally referred to the gating mechanisms of RNNs such as the LSTM (Hochreiter and Schmidhuber 1997) and GRU (J. Chung et al. 2014), or the gated equation (5) in Theorem 1. This was interpreted as a particular mechanism for controlling whether to let an input into the hidden state of an RNN. In particular, gating affects the propagation of signal through time and causes inputs to interact along the sequence length dimension. However, the concept of gating has since been relaxed in popular usage to simply mean any multiplicative interaction (often with an activation function). For example, elementwise multiplicative components of neural network architectures (that do not interact along sequence length) are now commonly referred to as gated architectures (Hua et al. 2022; Mehta et al. 2023), despite a very different meaning than the original RNN sense. Thus we believe the original concept of RNN gating and the popular usage of multiplicative gating have very different semantic meanings.

Hypernetworks. Hypernetworks refer to neural networks whose parameters are themselves generated by smaller neural networks.
The original idea (Ha, Dai, and Quoc V. Le 2017) used it in a narrow sense to define a large RNN whose recurrent parameters are generated by a smaller RNN.

Data-dependence. Similar to hypernetworks, data-dependence can refer to any notion where some parameters of the model depend on the data (Poli et al. 2023).

Example: GLU Activation. To illustrate the issues with these concepts, consider a simple diagonal linear layer y = Dx, where D is a diagonal weight parameter. Now suppose that D is itself generated from a linear transformation of x, with an optional nonlinearity: D = σ(Wx). Since it is diagonal, the multiplication becomes an elementwise product: y = σ(Wx) ⊙ x. This is a rather trivial transformation, yet it technically satisfies the common meanings of gating (since it has a multiplicative "branch"), hypernetworks (since the parameter D is generated by another layer), and data-dependence (since D depends on the data x). However, this in fact simply defines a GLU function, which is so simple that it is often considered just an activation function (Dauphin et al. 2017; Shazeer 2020) instead of a meaningful layer.
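To make the example concrete, the following is a minimal sketch of this data-dependent diagonal layer in PyTorch (our illustration; the module name and dimensions are arbitrary, not from the paper's code). Note that it acts independently on every position, with no interaction along the sequence length.

```python
import torch
import torch.nn as nn

class DataDependentDiagonal(nn.Module):
    """y = sigma(W x) * x: a 'diagonal hypernetwork' that is just a GLU-style gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim)  # generates the diagonal of D from the input itself

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        D = torch.sigmoid(self.W(x))  # "parameters" produced per input (data-dependent)
        return D * x                  # elementwise product: no mixing along the sequence axis

# usage: applied position-wise to a (batch, length, dim) tensor
layer = DataDependentDiagonal(dim=64)
y = layer(torch.randn(2, 16, 64))
```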
Selection. Thus, while selection mechanisms could be considered a special case of ideas such as architectural gating, hypernetworks, or data-dependence, so can an enormous range of other constructions (essentially anything with a multiplication, including standard attention mechanisms (Bahdanau, Cho, and Bengio 2015; Vaswani et al. 2017)), and we find it uninformative to think of them as such. Instead, we view it as most closely related to the gating mechanism of traditional RNNs, which is a special case (Theorem 1) and also has a deeper history of connections to SSMs through variable (input-dependent) discretization of Δ (Funahashi and Nakamura 1993; Gu, Dao, et al. 2020; Tallec and Ollivier 2018). We also eschew the term "gating" in favor of selection to clarify the overloaded use of the former. More narrowly, we use selection to refer to the mechanistic action of a model to select or ignore inputs and facilitate data interaction along the sequence length (Section 3.1). Beyond selective SSMs and gated RNNs, other examples may include input-dependent convolutions (Kosma, Nikolentzos, and Vazirgiannis 2023; Lioutas and Guo 2020; Lutati, Zimerman, and Wolf 2023; Yang et al. 2019) and even attention.
# B Related Work

We give an overview of several prior works related to our methods. Some of the most closely related models include recurrent layers such as S4, S5, and quasi-RNNs, as well as end-to-end architectures such as H3, RetNet, and RWKV.

# B.1 S4 Variants and Derivatives

We give a brief overview of some structured SSMs from past work, particularly those that have a relation to our method.

• S4 (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021) introduced the first structured SSM, describing diagonal structure and diagonal plus low-rank (DPLR). It focused on efficient convolutional algorithms for DPLR SSMs due to a connection to continuous-time online memorization (HIPPO) (Gu, Dao, et al. 2020).

• DSS (Gupta, Gu, and Berant 2022) first discovered the empirical effectiveness of diagonal structured SSMs by approximating the HIPPO initialization. This was expanded on theoretically in S4D (Gu, Gupta, et al. 2022).
• S5 (Smith, Warrington, and Linderman 2023) independently discovered the diagonal SSM approximation, and is the first S4 model to be computed recurrently with the parallel scan. However, this required lowering the effective state dimension, which they accomplished by switching the SSM dimensions from a SISO (single-input single-output) to a MIMO (multi-input multi-output) formulation. Our proposed S6 shares the scan, but differs by (i) keeping the SISO dimensions, which provides a larger effective recurrent state, (ii) using a hardware-aware algorithm to overcome the computation issue, and (iii) adding the selection mechanism.

Lu et al. (2023) applied S5 to meta-RL in order to handle resetting the SSM state between episode trajectories. Their mechanism can be viewed as a particular hard-coded instance of a selection mechanism, where A is manually set to 0, instead of our learnable mechanism that depends on the input. It would be interesting to apply selective SSMs generically to this setting and probe whether the model has learned to automatically reset its state on episode boundaries.
• Mega (Ma et al. 2023) introduced a simplification of S4 to be real- instead of complex-valued, giving it an interpretation as an exponential moving average (EMA). They additionally make an interesting connection between the discretization step of SSMs and an EMA damping term. Contrary to findings in the original S4 papers, this was the first model to show that real-valued SSMs are empirically effective in certain settings or when combined with different architectural components.

• Liquid S4 (Hasani et al. 2023) is also motivated by augmenting S4 with an input-dependent state transition. From this perspective it shares similarity with selection mechanisms, although in a limited form which is still computed convolutionally and is close to LTI.

• SGConv (Y. Li et al. 2023), Hyena (Poli et al. 2023), LongConv (Fu et al. 2023), MultiresConv (J. Shi, K. A. Wang, and Fox 2023), and Toeplitz Neural Network (Qin, Han, W. Sun, He, et al. 2023) all focus on the convolutional representation of S4 and create global or long convolution kernels with different parameterizations. However, these methods cannot do fast autoregressive inference directly.

Notably, all of these methods, and all other structured SSMs that we are aware of, have been non-selective and usually strictly LTI (linear time invariant).

# B.2 SSM Architectures

We use SSM architectures or state space neural networks (SSNN) to refer to deep neural network architectures incorporating one of the previous SSMs as a black box layer.
• GSS (Mehta et al. 2023) was the first gated neural network architecture incorporating SSMs. It is motivated by the gated attention unit (GAU) of Hua et al. (2022) and looks quite similar to our block, except with additional projections. Most importantly, its projection contracts the model dimension to reduce the state size of the SSM, while ours expands the model dimension in order to increase the state size, based on the motivation in Section 3.1.
• Mega (Ma et al. 2023) combined the EMA simplification of S4 described above into a hybrid architecture using an efficient attention approximation.

• H3 (Dao, Fu, Saab, et al. 2023) is motivated by combining S4 with linear attention (Katharopoulos et al. 2020). It is the first to generalize this formulation of linear attention to more general recurrences, which is also the basis of later architectures.

• Selective S4 (J. Wang et al. 2023) incorporates S4 as a black box to generate a binary mask which is multiplied on the input. While sharing the "selection" name, we consider this an architectural modification that is closer to architectural gating than a selection mechanism (Appendix A). For example, we hypothesize that it would not solve the Selective Copying task, because simply masking out the irrelevant inputs does not affect the spacing between the relevant ones (indeed, the Selective Copying task can even be viewed as coming pre-masked if the noise tokens are embedded to 0).
• RetNet (Y. Sun et al. 2023) is also based on Linear Attention and is very similar to H3, but reduces the inner S4 layer to a special case where the state dimension is N = 1. Although not framed as such, its recurrence can be viewed as a special case of a linear SSM. Its primary source of improvement is using a linear attention with large head dimension, which can be viewed as another method to perform input-dependent state expansion. Using a larger head dimension in the context of linear attention variants was first done by H3, but not extensively used since this requires a proportional amount of extra computation. RetNet avoids this with an alternate way to parallelize the computation, with a variant of standard multi-head attention instead of convolutions, made feasible by their particular special case of SSMs which acts as a simple EMA.
• RWKV (B. Peng et al. 2023) is another recent RNN designed for language modeling. It is based on AFT (attention-free Transformer (S. Zhai et al. 2021)), another variant of linear attention. Its main "WKV" mechanism involves LTI recurrences and can be seen as the ratio of two SSMs.

We also highlight the gated attention unit (GAU) from Hua et al. (2022), which was motivated by combining the Transformer's MHA and MLP blocks together and was an inspiration for our architecture (Section 3.4) combining the H3 and MLP blocks.

# B.3 Relationship to RNNs

RNNs and SSMs are broadly related, as they both involve the concept of recurrence on a latent state. Several older RNNs, such as the strongly typed RNN (Balduzzi and Ghifary 2016), quasi-RNN (QRNN) (Bradbury et al. 2016), and simple recurrent unit (SRU) (Lei 2021; Lei et al. 2017), involve forms of gated RNNs without time-wise nonlinearities. Because of the connections between gating mechanisms and selection mechanisms, these can be viewed as cases of selective SSMs, and are thus more powerful in a sense than the family of LTI structured SSMs above.
The main differences are:

• They do not use state expansion (N = 1) or selective B, C parameters, both of which are important for performance (Section 4.6).

• They use a heuristic gating mechanism, which we generalize as a consequence of the selection mechanism plus discretization (Theorem 1). The connections to principled SSM theory provide better parameterizations and initializations (Section 3.6).

Additionally, older RNNs famously suffered from efficiency issues and the vanishing gradients problem (Pascanu, Mikolov, and Bengio 2013), both caused by their sequential nature. The latter could be solved for some of the above RNNs by leveraging the parallel scan (Martin and Cundy 2018), but the former was difficult without theory later developed for SSMs. For example, modern structured SSMs differ in more careful parameterization of the recurrent dynamics inspired by classical SSM theory (e.g., through discretization (Gu, Johnson, Goel, et al. 2021; Gu, Johnson, Timalsina, et al. 2023)) or direct analysis (Orvieto et al. 2023).

We also note that there is a long line of work on orthogonal RNNs (Arjovsky, Shah, and Bengio 2016; Henaff, Szlam, and LeCun 2016; Lezcano-Casado and Martínez-Rubio 2019; Mhammedi et al. 2017; Vorontsov et al. 2017),
which are motivated by constraining the A transition matrix to be orthogonal or unitary in order to control its eigenvalues and prevent the vanishing gradient problem. However, these had other limitations; we believe that these limitations stem from the fact that orthogonal/unitary RNNs are also LTI. For example, they are almost always evaluated on the Copying task, which they can solve perfectly, but are observed to struggle on the Selective Copying task (Jing et al. 2019).

# B.4 Linear Attention

The Linear Attention (LA) framework (Katharopoulos et al. 2020) is an important result popularizing kernel attention and showing how it relates to recurrent autoregressive models. Many variants have proposed alternative kernels and other modifications.
Random Feature Attention (RFA) (H. Peng et al. 2021) chooses the kernel feature map to approximate softmax attention (i.e., the exp feature map) using the random Fourier feature approximation of Gaussian kernels (Rahimi and Recht 2007). Performer (Choromanski et al. 2021) finds an approximation to the exponential kernel involving only positive features, which also allows the softmax normalization term. TransNormer (Qin, Han, W. Sun, D. Li, et al. 2022) showed that the LA denominator term can be unstable and proposed replacing it with a LayerNorm. cosFormer (Qin, W. Sun, et al. 2022) augments RFA with a cosine reweighting mechanism that incorporates positional information to emphasize locality. Linear Randomized Attention (Zheng, C. Wang, and L. Kong 2022) generalizes RFA from the perspective of importance sampling, and generalizes it to provide better estimates of the full softmax kernel (rather than just the exp-transformed numerator). Aside from kernel attention, many other variants of efficient attention exist; the survey by Tay, Dehghani, Bahri, et al. (2022) offers an extensive categorization of many of these.

# B.5 Long Context Models

Long context has become a popular subject, and several recent models have claimed to scale to longer and longer sequences. However, these claims are often from a computational standpoint and have not been extensively validated. These include:
• Recurrent Memory Transformer (Bulatov, Kuratov, and Burtsev 2023), a lightweight wrapper around a Transformer backbone. It showed the ability to generalize up to sequences of length 1M, but only on synthetic memorization tasks; their main result is similar to our Induction Heads extrapolation experiment (Table 2).

• LongNet (Ding et al. 2023), which claimed to scale to length 1B but only evaluated on lengths < 100K for actual tasks.

• Hyena and HyenaDNA (Nguyen, Poli, et al. 2023; Poli et al. 2023), which claimed to leverage up to 1M context. However, their experiments trained on proportionally more data at longer contexts, making it hard to conclude whether quality improvements at 1M context are due to context length or due to more data and computation.
• Sparse Transformer (Child et al. 2019) showed a proof-of-concept of using a strided sparse attention Transformer to model audio waveforms of length 2^20 = 1048576, although it did not discuss performance tradeoffs when controlling for computation and model size.

In contrast, we believe this work presents one of the first approaches to meaningfully demonstrate increasing performance with longer context.

# C Mechanics of Selective SSMs

Proof of Theorem 1. Consider a selective SSM (Algorithm 2) with N = 1, A = −1, B = 1, s_Δ = Linear(x), τ_Δ = softplus. The corresponding continuous-time SSM (1) is

$$h'(t) = -h(t) + x(t),$$

which is also called a leaky integrator.

The discretization step size is

$$\Delta_t = \tau_\Delta(\mathrm{Parameter} + s_\Delta(x_t)) = \mathrm{softplus}(\mathrm{Parameter} + \mathrm{Linear}(x_t)) = \mathrm{softplus}(\mathrm{Linear}(x_t)),$$
where we observe that the parameter can be viewed as a learnable bias and folded into the linear projection.

Now applying the zero-order hold (ZOH) discretization formulas:

$$\overline{A}_t = \exp(\Delta A) = \frac{1}{1 + \exp(\mathrm{Linear}(x_t))} = \sigma(-\mathrm{Linear}(x_t)) = 1 - \sigma(\mathrm{Linear}(x_t))$$

$$\overline{B}_t = (\Delta A)^{-1}(\exp(\Delta A) - I) \cdot \Delta B = -(\exp(\Delta A) - I) = 1 - \overline{A}_t = \sigma(\mathrm{Linear}(x_t)).$$

Thus the final discrete recurrence (2a) is

$$g_t = \sigma(\mathrm{Linear}(x_t)), \qquad h_t = (1 - g_t) h_{t-1} + g_t x_t,$$

as desired.
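As a quick numerical sanity check of this derivation (our own sketch, not code from the paper), the ZOH-discretized scalar SSM above can be run side by side with the gated recurrence, and the two match:

```python
import torch

torch.manual_seed(0)
L, D = 32, 4
x = torch.randn(L, D)
W, b = torch.randn(D, D), torch.randn(D)          # stand-in for the Linear(x_t) projection

z = x @ W.T + b                                   # Linear(x_t)
delta = torch.nn.functional.softplus(z)           # Delta_t = softplus(Linear(x_t))
A_bar = torch.exp(delta * (-1.0))                 # exp(Delta * A) with A = -1
B_bar = 1.0 - A_bar                               # equals sigma(Linear(x_t)) per the derivation

# SSM recurrence h_t = A_bar_t * h_{t-1} + B_bar_t * x_t (per channel, N = 1)
h_ssm = torch.zeros(D)
# Equivalent gated RNN with gate g_t = sigma(Linear(x_t))
g = torch.sigmoid(z)
h_gate = torch.zeros(D)
for t in range(L):
    h_ssm = A_bar[t] * h_ssm + B_bar[t] * x[t]
    h_gate = (1 - g[t]) * h_gate + g[t] * x[t]

print(torch.allclose(h_ssm, h_gate, atol=1e-6))   # True: the two recurrences coincide
```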
# D Hardware-aware Algorithm For Selective SSMs

Without input-dependent selectivity, SSMs can be efficiently implemented as a convolution (Dao, Fu, Saab, et al. 2023; Gu, Goel, and Ré 2022), which leverages the fast Fourier transform (FFT) as a primitive. With selectivity, SSMs are no longer equivalent to convolution, but we can leverage the parallel associative scan. While SSM scans are theoretically efficient (O(BLDN) FLOPs, scaling linearly in L), training foundation models with selective SSMs requires them to be efficient on modern hardware (GPUs) as well. We describe how we use kernel fusion and recomputation to make the SSM scan fast and memory-efficient. We evaluate the speed of our scan implementation compared to convolution and attention in Section 4.5, showing that it is up to 7× faster than attention at sequence length 32K, and is as memory-efficient as the best attention implementation (FlashAttention).

Speed. On modern hardware accelerators (GPUs), most operations (except matrix multiply) are bounded by memory bandwidth (Dao, Fu, Ermon, et al. 2022; Ivanov et al. 2021; Williams, Waterman, and Patterson 2009). This is the case with our scan operation, and we use kernel fusion to reduce the amount of memory IOs, leading to a significant speedup compared to a standard implementation.

The standard way to implement the scan algorithm in Section 3.2 is to prepare the scan inputs Ā, B̄ of size (B, L, D, N) in GPU HBM (high-bandwidth memory, commonly referred to as GPU memory), call a parallel associative scan implementation to write the scan output of size (B, L, D, N) to GPU HBM, and then multiply that scan output with C to produce an output of size (B, L, D). However, this requires a number of memory reads/writes on the order of O(BLDN). We can instead fuse the discretization step, the scan, and the multiplication with C into one kernel:

1. We read in O(BLD + DN) bytes of memory (Δ, A, B, C) from slow HBM to fast SRAM.
2. We discretize to produce Ā, B̄ of size (B, L, D, N) in SRAM.
3. We perform a parallel associative scan, yielding intermediate states of size (B, L, D, N) in SRAM.
4. We multiply and sum with C, producing outputs of size (B, L, D), and write them to HBM.

This way, we reduce IOs by a factor of O(N) (the state dimension), which in practice speeds up the operation by 20-40 times (Section 4.5).
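For reference, an unfused sequential version of the computation being fused here might look like the following sketch (our own illustrative PyTorch code with assumed shapes, not the paper's fused CUDA kernel; the B̄ discretization uses a simplified Euler-style rule rather than exact ZOH):

```python
import torch

def selective_scan_reference(delta, A, B, C, x):
    """Naive sequential selective scan.

    delta: (batch, L, D)  input-dependent step sizes
    A:     (D, N)         diagonal state matrix, one row per channel
    B, C:  (batch, L, N)  input-dependent projections
    x:     (batch, L, D)  input sequence
    returns y: (batch, L, D)
    """
    batch, length, d = x.shape
    n = A.shape[1]
    # Discretize: exact ZOH for A, a simplified Euler-style rule for B
    dA = torch.exp(delta.unsqueeze(-1) * A)                     # (batch, L, D, N)
    dB = delta.unsqueeze(-1) * B.unsqueeze(2)                   # (batch, L, D, N)
    h = torch.zeros(batch, d, n, device=x.device, dtype=x.dtype)
    ys = []
    for t in range(length):
        h = dA[:, t] * h + dB[:, t] * x[:, t].unsqueeze(-1)     # (batch, D, N)
        ys.append((h * C[:, t].unsqueeze(1)).sum(-1))           # contract over the state dim N
    return torch.stack(ys, dim=1)                               # (batch, L, D)

# usage with arbitrary sizes
b, l, d, n = 2, 64, 16, 8
y = selective_scan_reference(
    torch.rand(b, l, d).exp(),        # positive step sizes
    -torch.rand(d, n),                # stable (negative) diagonal A
    torch.randn(b, l, n),
    torch.randn(b, l, n),
    torch.randn(b, l, d),
)
```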
Table 11: (Induction heads.) Models are trained on sequence length 2^8 = 256, and tested on various sequence lengths from 2^6 = 64 up to 2^20 = 1048576. Columns report test accuracy (%) at each sequence length; ✓ denotes perfect generalization accuracy, while ✗ denotes out of memory.

| Model | Params | 2^6 | 2^7 | 2^8 | 2^9 | 2^10 | 2^11 | 2^12 | 2^13 | 2^14 | 2^15 | 2^16 | 2^17 | 2^18 | 2^19 | 2^20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MHA-Abs | 137K | ✓ | 99.6 | 100.0 | 58.6 | 26.6 | 18.8 | 9.8 | 10.9 | 7.8 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MHA-RoPE | 137K | ✓ | ✓ | 100.0 | 83.6 | 31.3 | 18.4 | 8.6 | 9.0 | 5.5 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MHA-xPos | 137K | ✓ | ✓ | 100.0 | 99.6 | 67.6 | 25.4 | 7.0 | 9.0 | 7.8 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3 | 153K | ✓ | ✓ | 100.0 | 80.9 | 39.5 | 23.8 | 14.8 | 8.2 | 5.9 | 6.6 | 8.2 | 4.7 | 8.2 | 6.3 | 7.4 |
| Hyena | 69M* | 97.7 | ✓ | 100.0 | ✓ | 44.1 | 12.5 | 6.6 | 5.1 | 7.0 | 5.9 | 6.6 | 6.6 | 5.9 | 6.3 | 9.8 |
| Mamba | 74K | ✓ | ✓ | 100.0 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

* Most of the parameters are in learnable positional encodings.

For sequence lengths L too long to fit in SRAM (which is much smaller than HBM), we split the sequences into chunks and perform the fused scan on each chunk. As long as we have the intermediate scan states, we can continue the scan with the next chunk.

Memory. We describe how we use the classical technique of recomputation to reduce the total amount of memory required to train selective SSM layers. From the way we fuse the forward pass, we do not save the intermediate states of size (B, L, D, N) to avoid memory blowup.
However, these intermediate states are necessary for the backward pass to compute gradients. We instead recompute those intermediate states in the backward pass. Since the inputs Δ, A, B, C and the output gradient read from HBM to SRAM are of size O(BLN + DN), and the input gradients are also of size O(BLN + DN), recomputation avoids the cost of reading O(BLND) elements from HBM. This means that recomputation of the SSM states in the backward pass speeds up the computation compared to storing them and reading them from HBM.

Beyond optimizing the memory requirement of just the scan operation, we also use recomputation to optimize the memory requirement of the entire selective SSM block (input projection, convolution, activation, scan, output projection). In particular, we do not save intermediate activations that take a lot of memory but are fast to recompute (e.g., the output of the activation function or the short convolution). As a result, the selective SSM layer has the same memory requirement as an optimized Transformer implementation with FlashAttention. In particular, each attention layer (FlashAttention) stores around 12 bytes of activations per token, and each MLP layer stores around 20 bytes of activations per token, for a total of 32 bytes (assuming mixed-precision training in FP16 or BF16). Each selective SSM stores around 16 bytes of activations per token. Hence two layers of selective SSMs have around the same activation memory as an attention layer and an MLP layer.
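At the framework level, the same trade of recomputation for memory can be emulated with activation checkpointing; the sketch below is a generic illustration of that idea (our own code, not the paper's implementation, which performs the recomputation inside its fused backward kernel):

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    """Wraps a sub-module so its intermediate activations are recomputed in backward."""
    def __init__(self, block: torch.nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the input (and the block's parameters) are kept for backward;
        # everything computed inside `block` is recomputed when gradients are needed.
        return checkpoint(self.block, x, use_reentrant=False)

# usage: trade a second forward pass for lower activation memory
block = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.SiLU(), torch.nn.Linear(256, 64))
y = CheckpointedBlock(block)(torch.randn(8, 512, 64, requires_grad=True))
y.sum().backward()
```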
# E Experimental Details and Additional Results

# E.1 Synthetic Tasks

Selective Copying. Our setting is on sequences of length 4096, with a vocab size of 16 possible tokens (including the white "noise" token from Figure 2), and requires models to memorize 16 "data" tokens. We use 2-layer models with a model dimension of D = 64. Models are trained for 400K steps at a constant learning rate of 0.0001 with a batch size of 64.

Induction Heads. Training consists of randomly generating data every step, with a batch size of 8. We choose an "epoch" size of 8192 steps, and track the accuracy on fixed validation sets (also randomly generated) of each target sequence length. For the MHA-Abs and Mamba models, results are reported after the 25th epoch (8192 × 25 = 204800 steps). For the MHA-RoPE and MHA-xPos models, results are reported after the 50th epoch (8192 × 50 = 409600 steps). For the LTI H3 and Hyena models, results are reported after the 10th epoch (81920 steps), because they had converged by then and failed to improve further.
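To make the Selective Copying setup concrete, a data generator consistent with the description above might look like the following sketch (our own construction; the exact token ids, whether targets are appended to the input for autoregressive training, and other conventions are assumptions, not the paper's released data pipeline):

```python
import torch

def selective_copying_batch(batch_size: int, seq_len: int = 4096,
                            vocab_size: int = 16, n_memorize: int = 16):
    """Sequences are mostly a 'noise' token (id 0) with n_memorize 'data' tokens
    scattered at random positions; the target is the data tokens in order."""
    x = torch.zeros(batch_size, seq_len, dtype=torch.long)           # 0 = assumed noise token id
    targets = torch.randint(1, vocab_size, (batch_size, n_memorize)) # data tokens from the rest of the vocab
    for i in range(batch_size):
        pos = torch.sort(torch.randperm(seq_len)[:n_memorize]).values
        x[i, pos] = targets[i]                                       # random spacing between data tokens
    return x, targets

x, y = selective_copying_batch(batch_size=64)   # matches the batch size quoted above
```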
Table 12: (Scaling Law Model Sizes.) Our model sizes and hyperparameters for scaling experiments. (Model dimension and number of heads apply only to Transformer models.)

| Params | n_layers | d_model | n_heads / d_head | Training steps | Learning Rate | Batch Size | Tokens |
|---|---|---|---|---|---|---|---|
| 125M | 12 | 768 | 12 / 64 | 4800 | 6e-4 | 0.5M tokens | 2.5B |
| 350M | 24 | 1024 | 16 / 64 | 13500 | 3e-4 | 0.5M tokens | 7B |
| 760M | 24 | 1536 | 16 / 96 | 29000 | 2.5e-4 | 0.5M tokens | 15B |
| 1.3B | 24 | 2048 | 32 / 64 | 50000 | 2e-4 | 0.5M tokens | 26B |

We use the Adam optimizer with no weight decay. All models are trained at constant learning rates
2e-4 and 1e-3, and the better results are reported for each model (2e-4 for all models except Mamba). The attention and Hyena models did not learn at LR 1e-3. H3 learned at both LRs, but interestingly generalized better to shorter sequences at the smaller LR of 2e-4. Mamba learned at both LRs, but extrapolated better at the larger LR of 1e-3.

# E.2 Language Modeling

# E.2.1 Scaling Law Details

All models were trained on the Pile.

Model Sizes. Table 12 specifies the model sizes we use for scaling laws. This is taken directly from the GPT3 specifications (Brown et al. 2020), with very minor modifications.
First, we changed the batch size of the 1.3B model from 1M tokens to 0.5M tokens, since we did not use enough parallelization to require the larger batch size. Second, we changed the number of training steps and total tokens to roughly match Chinchilla scaling laws (Hoffmann et al. 2022), which specify that training tokens should increase proportionally to model size.

Training Recipes. All models used the AdamW optimizer with:

• gradient clip value 1.0
• weight decay 0.1
• no dropout
• linear learning rate warmup with cosine decay

By default, the peak learning rate is the GPT3 specification.
We give several models an "improved recipe", inspired by changes adopted by popular large language models such as PaLM (Chowdhery et al. 2023) and LLaMa (Touvron et al. 2023). These include:

• linear learning rate warmup with cosine decay to 1e-5, with a peak value of 5× the GPT3 value
• no linear bias terms
• RMSNorm instead of LayerNorm
• AdamW hyperparameter β = (.9, .95) (the GPT3 value) instead of the PyTorch default of β = (.9, .999)
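A hedged sketch of this recipe in PyTorch terms follows (the model, warmup length, and total step count are placeholders chosen for illustration, not values specified by the paper):

```python
import math
import torch

model = torch.nn.Linear(768, 768)                      # placeholder model
peak_lr, min_lr = 5 * 6e-4, 1e-5                       # e.g. 5x the GPT3 125M value, decayed to 1e-5
warmup_steps, total_steps = 100, 2000                  # placeholder schedule lengths

optimizer = torch.optim.AdamW(
    model.parameters(), lr=peak_lr, betas=(0.9, 0.95), weight_decay=0.1)

def lr_lambda(step: int) -> float:
    if step < warmup_steps:                            # linear warmup
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))  # cosine decay down to min_lr
    return (min_lr + (peak_lr - min_lr) * cosine) / peak_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 768)).pow(2).mean()    # dummy objective
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # gradient clip value 1.0
    optimizer.step()
    scheduler.step()
```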