
# mamba-2.8b-GGUF

Quantized versions of mamba-2.8b in GGUF format, converted with recent versions of llama.cpp.
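
Below is a minimal usage sketch with llama-cpp-python (the Python bindings for llama.cpp); it assumes a build recent enough to include mamba support. The filename `mamba-2.8b.Q4_K_M.gguf` is an assumed example, so substitute whichever quantized file you download from this repository.

```python
# Minimal sketch: run a quantized mamba-2.8b GGUF file with llama-cpp-python.
# The filename below is an assumed example; replace it with your downloaded quant.
from llama_cpp import Llama

llm = Llama(
    model_path="mamba-2.8b.Q4_K_M.gguf",  # assumed local filename
    n_ctx=2048,                           # context window size
)

out = llm(
    "Mamba is a state-space model that",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```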

- Format: GGUF
- Model size: 2.77B params
- Architecture: mamba
