An Empirical Study of Mamba-based Language Models

Documentation   Paper   Models

Overview

We release the 8B-parameter Mamba-2 and Mamba-2-Hybrid models (the hybrid combines Mamba-2, attention, and MLP layers) trained for the paper An Empirical Study of Mamba-based Language Models. These models were trained on 3.5T tokens with a sequence length of 4K. They can be compared to the released 8B-parameter Transformer trained on the same data with the same hyperparameters. We also release the 32K and 128K long-context extensions of Mamba-2-Hybrid.

Model Version(s)

mamba2-hybrid-8b-3t-128k: 8B-parameter Mamba-2-Hybrid model trained on 3.5T tokens extended to support 128K sequence lengths through continued pretraining on 50B tokens.

Toolkit

Megatron-LM Framework

Citations

See more details in our paper:

An Empirical Study of Mamba-based Language Models.

Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro. (2024)

Please cite the paper as follows if you use the models from this repository:

@article{waleffe2024anempirical,
    title   = {An Empirical Study of Mamba-based Language Models},
    author  = {Roger Waleffe and Wonmin Byeon and Duncan Riach and Brandon Norick and Vijay Korthikanti and Tri Dao and Albert Gu and Ali Hatamizadeh and Sudhakar Singh and Deepak Narayanan and Garvit Kulshreshtha and Vartika Singh and Jared Casper and Jan Kautz and Mohammad Shoeybi and Bryan Catanzaro},
    year    = {2024},
    journal = {arXiv preprint arXiv:2406.07887}
}