FalconMamba 7B
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
Falcon Mamba: The First Competitive Attention-free 7B Language Model
Paper (arXiv 2410.05355). The FalconMamba technical report.
tiiuae/falcon-mamba-7b
Text Generation. The first strong attention-free model for general-purpose use, based on the Mamba-1 architecture.
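The base model can be loaded like any other causal language model. A minimal generation sketch with Hugging Face `transformers` (assumes a recent release with FalconMamba support, `accelerate` for `device_map`, and enough memory for a 7B model; the prompt and generation settings are illustrative):

```python
# Minimal generation sketch for the FalconMamba base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the weights on available devices (needs accelerate).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Question: How many hours are in one day?\nAnswer:",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```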
tiiuae/falcon-mamba-7b-instruct
Text Generation. FalconMamba-7B fine-tuned on instruction data, for chat-style interaction with the model.
tiiuae/falcon-mamba-7b-4bit
Text Generation. FalconMamba-7B quantized to 4-bit precision with the `bitsandbytes` library, for lower memory requirements and smaller GPUs.
tiiuae/falcon-mamba-7b-instruct-4bit
FalconMamba-7B-instruct quantized to 4-bit precision with the `bitsandbytes` library, for lower memory requirements and smaller GPUs.
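The 4-bit repos can be loaded directly, but the same effect can be sketched by quantizing the full-precision instruct weights at load time with `bitsandbytes` (assumes a CUDA GPU with `bitsandbytes` installed; the NF4 settings below are common defaults, not necessarily the configuration used for the published -4bit checkpoints):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-mamba-7b-instruct"

# 4-bit NF4 quantization applied on the fly at load time (illustrative settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# The instruct model expects its chat template for chat-style prompts.
messages = [{"role": "user", "content": "Explain state-space models in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```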
tiiuae/falcon-mamba-7b-instruct-BF16-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), BF16 precision.
tiiuae/falcon-mamba-7b-instruct-F16-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), F16 precision.
tiiuae/falcon-mamba-7b-instruct-Q8_0-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q8_0.
tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q4_K_M.
tiiuae/falcon-mamba-7b-BF16-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), BF16 precision.
tiiuae/falcon-mamba-7b-F16-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), F16 precision.
tiiuae/falcon-mamba-7b-Q8_0-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q8_0.
tiiuae/falcon-mamba-7b-Q4_K_M-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q4_K_M.
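A sketch of running one of the GGUF variants with llama.cpp (assumes llama.cpp is built locally and `huggingface-cli` is installed; the exact `.gguf` filename inside the repo is an assumption, so check the repo's file list before downloading):

```shell
# Download a quantized GGUF file from the Hub (filename is an assumption;
# verify it against the repository's file listing).
huggingface-cli download tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF \
    falcon-mamba-7b-instruct-Q4_K_M.gguf --local-dir ./models

# Run generation with llama.cpp's CLI (-n limits new tokens).
./llama-cli -m ./models/falcon-mamba-7b-instruct-Q4_K_M.gguf \
    -p "Explain the Mamba architecture briefly." -n 128
```

The Q4_K_M file is the smallest of the published variants and fits comfortably on consumer hardware; the BF16/F16 files trade size for full-precision weights.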
tiiuae/falcon-mamba-7b-pre-decay
Checkpoint from the pre-decay stage of training, useful for continual pretraining.