---
license: apache-2.0
datasets:
  - tiiuae/falcon-refinedweb
pipeline_tag: text-generation
library_name: openlm
tags:
  - mamba
  - linear
language:
  - en
model-index:
  - name: mamba-7b
    results:
      - task:
          type: text-generation
        dataset:
          type: MMLU
          name: MMLU
        metrics:
          - name: accuracy
            type: accuracy
            value: 33.3
            verified: false
      - task:
          type: text-generation
        dataset:
          type: HellaSwag
          name: HellaSwag
        metrics:
          - name: accuracy
            type: accuracy
            value: 77.9
            verified: false
      - task:
          type: text-generation
        dataset:
          type: PIQA
          name: PIQA
        metrics:
          - name: accuracy
            type: accuracy
            value: 81
            verified: false
      - task:
          type: text-generation
        dataset:
          type: Winogrande
          name: Winogrande
        metrics:
          - name: accuracy
            type: accuracy
            value: 71.8
            verified: false
      - task:
          type: text-generation
        dataset:
          type: ai2_arc
          name: ARC-E
        metrics:
          - name: accuracy
            type: accuracy
            value: 77.5
            verified: false
      - task:
          type: text-generation
        dataset:
          type: ai2_arc
          name: ARC-C
        metrics:
          - name: accuracy
            type: accuracy
            value: 46.7
            verified: false
---

# Mamba-7B

This is a 7B-parameter model with the Mamba architecture, trained on 1.2T tokens of the RefinedWeb dataset. Mamba is a state-space model that, unlike the standard transformer architecture, does not use self-attention. It has shown strong performance on various natural language benchmarks. Prior to this release, the largest publicly available pure-Mamba pretrained model was Mamba-2.8B. We follow its training recipe and release our version of Mamba-7B.
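
To make "state-space model" concrete, below is a minimal, unoptimized sketch of the selective scan recurrence at the core of a Mamba block (following Gu & Dao, 2023). The real model fuses this scan into a hardware-aware kernel and wraps it with convolutions, gating, and projections; the shapes and names here are illustrative, not the OpenLM implementation.

```python
import torch

def selective_scan(x, A, B, C, delta):
    """Sequential reference scan for a selective SSM (illustrative only).

    x:     (batch, seq_len, d_inner)   input sequence
    A:     (d_inner, d_state)          state transition matrix
    B, C:  (batch, seq_len, d_state)   input-dependent ("selective") projections
    delta: (batch, seq_len, d_inner)   input-dependent step sizes
    """
    batch, seq_len, d_inner = x.shape
    h = torch.zeros(batch, d_inner, A.shape[1], device=x.device)
    ys = []
    for t in range(seq_len):
        # Discretize the continuous-time system with step size delta_t.
        dA = torch.exp(delta[:, t].unsqueeze(-1) * A)          # (batch, d_inner, d_state)
        dB = delta[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1)  # (batch, d_inner, d_state)
        # Recurrent state update: no attention over past tokens, just h.
        h = dA * h + dB * x[:, t].unsqueeze(-1)
        # Read out the state through C_t.
        ys.append((h @ C[:, t].unsqueeze(-1)).squeeze(-1))     # (batch, d_inner)
    return torch.stack(ys, dim=1)                              # (batch, seq_len, d_inner)
```

Because the state `h` has fixed size, inference cost per token is constant in sequence length, unlike self-attention.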

## Model Details

| Parameters | Hidden Size | Layers | Vocab Size | Sequence Length |
| ---------- | ----------- | ------ | ---------- | --------------- |
| 7B         | 4096        | 64     | 50432      | 2048            |
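
As a sanity check on the "7B" figure, a back-of-envelope count from these shapes comes out close to 7B. The block internals below (`d_inner = 2 * d_model`, `d_state = 16`, `dt_rank = d_model // 16`, untied embeddings) are the defaults from Gu & Dao (2023) and are assumptions here, not confirmed details of this checkpoint.

```python
# Hypothetical parameter count, assuming default Mamba block internals.
d_model, n_layers, vocab = 4096, 64, 50432
d_inner, d_state, dt_rank, d_conv = 2 * d_model, 16, d_model // 16, 4

per_block = (
    2 * d_inner * d_model                 # in_proj (x and gate branches)
    + d_inner * d_conv                    # depthwise conv1d
    + d_inner * (dt_rank + 2 * d_state)   # x_proj -> (delta, B, C)
    + dt_rank * d_inner                   # dt_proj
    + d_inner * d_state + d_inner         # A_log and D
    + d_inner * d_model                   # out_proj
)
total = n_layers * per_block + 2 * vocab * d_model  # + untied embeddings & head
print(f"{total / 1e9:.2f}B parameters")  # ~7.15B, consistent with "7B"
```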

## Training Details

- Mamba-7B was trained using AWS SageMaker on 128 H100 80GB GPUs.
- Training began in March 2024 and lasted around three weeks, with some downtime due to crashes and loss spikes.

| Hyperparameter  | Value    |
| --------------- | -------- |
| Precision       | bfloat16 |
| Optimizer       | AdamW    |
| Learning rate   | 3e-4     |
| LR cooldown end | 1e-5     |
| QK-norm         | False    |
| Warmup steps    | 2000     |
| Z-loss          | 1e-4     |
| Batch size      | 2M       |
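
The table implies a warmup-then-decay learning-rate schedule: 2000 warmup steps up to a peak of 3e-4, cooling down to 1e-5. A minimal sketch of one common realization (linear warmup followed by cosine decay) is below; the exact schedule shape is defined by the OpenLM training config, so the cosine form is an assumption.

```python
import math

def lr_at_step(step: int, total_steps: int,
               peak_lr: float = 3e-4, end_lr: float = 1e-5,
               warmup_steps: int = 2000) -> float:
    """Hypothetical schedule matching the hyperparameters above."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup from 0
    # Cosine decay from peak_lr down to the cooldown floor end_lr.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return end_lr + 0.5 * (peak_lr - end_lr) * (1.0 + math.cos(math.pi * progress))
```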

## Usage

This model was trained using OpenLM. The weights have been converted to be compatible with Hugging Face `transformers`.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tri-ml/mamba-7b-rw")
model = AutoModelForCausalLM.from_pretrained("tri-ml/mamba-7b-rw").cuda()

inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}

# Move the input ids onto the same device as the model before generating.
output = model.generate(inputs["input_ids"].cuda(), **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
# <s> A beautiful flower box made of white rose wood. It is a perfect gift for weddings, birthdays and anniversaries.
# All the roses are from our farm Roses Flanders. Therefor you know that these flowers last much longer than those in store or online!</s>
```

## Performance Evaluation

Our evaluations were done using the Eleuther LM Eval Harness repo.
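
As a hedged sketch (not the exact commands used for the numbers below), an equivalent run with the harness's Python API (available in v0.4+) might look like the following. The task names and few-shot settings are assumptions, apart from MMLU being 5-shot.

```python
# Hypothetical reproduction sketch using lm-evaluation-harness (v0.4+).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tri-ml/mamba-7b-rw",
    tasks=["hellaswag", "piqa", "winogrande", "arc_easy", "arc_challenge"],
)
mmlu = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tri-ml/mamba-7b-rw",
    tasks=["mmlu"],
    num_fewshot=5,  # the table below reports MMLU 5-shot
)
print(results["results"], mmlu["results"])
```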

Below we report the performance of Mamba-7B compared to other base models.

| Model      | HellaSwag | PIQA | Winogrande | ARC-E | ARC-C | MMLU (5-shot) |
| ---------- | --------- | ---- | ---------- | ----- | ----- | ------------- |
| Mamba-1.4B | 59.0      | 73.9 | 61.4       | 65.5  | 32.9  | 25.2          |
| Mamba-2.8B | 71.0      | 78.1 | 65.9       | 68.2  | 41.7  | 26.2          |
| Llama2-7B  | 76.0      | 79.1 | 69.1       | 76.3  | 46.3  | 45.9          |
| Gemma-7B   | 80.7      | 81.9 | 73.7       | 81.1  | 53.2  | 62.9          |
| Mistral-7B | 81.0      | 82.1 | 74.0       | 80.9  | 53.8  | 62.4          |
| Mamba-7B   | 77.9      | 81.0 | 71.8       | 77.5  | 46.7  | 33.3          |

## How to Cite

If you use this model, please cite our paper on Linearizing Large Language Models.

```bibtex
@article{Mercat2024Linearizing,
  title={Linearizing Large Language Models},
  author={Jean Mercat and Igor Vasiljevic and Sedrick Keh and Kushal Arora and Achal Dave and Adrien Gaidon and Thomas Kollar},
  journal={ArXiv},
  year={2024},
}
```

## Citations

### Mamba

```bibtex
@article{mamba,
  title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces},
  author={Gu, Albert and Dao, Tri},
  journal={arXiv preprint arXiv:2312.00752},
  year={2023}
}
```

### OpenLM

```bibtex
@misc{open_lm,
  author = {Gururangan, Suchin and Wortsman, Mitchell and Gadre, Samir Yitzhak and Dave, Achal and Kilian, Maciej and Shi, Weijia and Mercat, Jean and Smyrnis, Georgios and Ilharco, Gabriel and Jordan, Matt and Heckel, Reinhard and Dimakis, Alex and Farhadi, Ali and Shankar, Vaishaal and Schmidt, Ludwig},
  title = {{open_lm}: a minimal but performative language modeling (LM) repository},
  year = {2023},
  note = {GitHub repository},
  url = {https://github.com/mlfoundations/open_lm/}
}
```