
## Llamacpp Quantizations of mamba-2.8b-hf

Using llama.cpp release b2536 for quantization.

Original model: https://huggingface.co/state-spaces/mamba-2.8b-hf

Download a single file (not the whole branch) from the table below; example download and inference snippets follow the table.

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| mamba-2.8b-hf-Q8_0.gguf | Q8_0 | 3.30GB | Extremely high quality, generally unneeded but max available quant. |
| mamba-2.8b-hf-Q6_K.gguf | Q6_K | 2.66GB | Very high quality, near perfect, recommended. |
| mamba-2.8b-hf-Q5_K_M.gguf | Q5_K_M | 2.32GB | High quality, very usable. |
| mamba-2.8b-hf-Q5_K_S.gguf | Q5_K_S | 2.32GB | High quality, very usable. |
| mamba-2.8b-hf-Q5_0.gguf | Q5_0 | 2.32GB | High quality, older format, generally not recommended. |
| mamba-2.8b-hf-Q4_K_M.gguf | Q4_K_M | 2.01GB | Good quality, uses about 4.83 bits per weight. |
| mamba-2.8b-hf-Q4_K_S.gguf | Q4_K_S | 2.01GB | Slightly lower quality with small space savings. |
| mamba-2.8b-hf-IQ4_NL.gguf | IQ4_NL | 2.01GB | Decent quality, similar to Q4_K_S, new method of quanting. |
| mamba-2.8b-hf-IQ4_XS.gguf | IQ4_XS | 1.93GB | Decent quality, new method with similar performance to Q4. |
| mamba-2.8b-hf-Q4_0.gguf | Q4_0 | 2.01GB | Decent quality, older format, generally not recommended. |
| mamba-2.8b-hf-Q3_K_L.gguf | Q3_K_L | 1.68GB | Lower quality but usable, good for low RAM availability. |
| mamba-2.8b-hf-Q3_K_M.gguf | Q3_K_M | 1.68GB | Even lower quality. |
| mamba-2.8b-hf-IQ3_M.gguf | IQ3_M | 1.68GB | Medium-low quality, new method with decent performance. |
| mamba-2.8b-hf-IQ3_S.gguf | IQ3_S | 1.68GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| mamba-2.8b-hf-Q3_K_S.gguf | Q3_K_S | 1.68GB | Low quality, not recommended. |
| mamba-2.8b-hf-Q2_K.gguf | Q2_K | 1.42GB | Extremely low quality, not recommended. |
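
A single file can be fetched with the `huggingface_hub` Python library; a minimal sketch, where the choice of the Q4_K_M file is arbitrary:

```python
# Minimal sketch: download one quant file from this repo.
# Requires `pip install huggingface_hub`; the Q4_K_M file is an arbitrary choice.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/mamba-2.8b-hf-GGUF",
    filename="mamba-2.8b-hf-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded .gguf file
```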
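
The downloaded file can then be loaded with any llama.cpp-based runtime recent enough to include the mamba architecture. A sketch using the llama-cpp-python bindings, assuming they are built against a llama.cpp at least as new as b2536:

```python
# Minimal sketch: run the downloaded GGUF with llama-cpp-python
# (`pip install llama-cpp-python`); assumes the bundled llama.cpp
# supports the mamba architecture (release b2536 or later).
from llama_cpp import Llama

llm = Llama(model_path="mamba-2.8b-hf-Q4_K_M.gguf")
out = llm("Mamba is a state-space model that", max_tokens=64)
print(out["choices"][0]["text"])
```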

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
