---
base_model: axolotl-ai-co/romulus-mistral-nemo-12b-simpo
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# axolotl-ai-co/romulus-mistral-nemo-12b-simpo GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
## Available Quantizations 📊
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-IQ4_XS.gguf | 6485.04 MB |
| Q2_K | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q2_K.gguf | 4569.10 MB |
| Q3_K_L | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q3_K_L.gguf | 6257.54 MB |
| Q3_K_M | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q3_K_M.gguf | 5801.29 MB |
| Q3_K_S | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q3_K_S.gguf | 5277.85 MB |
| Q4_K_M | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q4_K_M.gguf | 7130.82 MB |
| Q4_K_S | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q4_K_S.gguf | 6790.35 MB |
| Q5_K_M | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q5_K_M.gguf | 8323.32 MB |
| Q5_K_S | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q5_K_S.gguf | 8124.10 MB |
| Q6_K | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q6_K.gguf | 9590.35 MB |
| Q8_0 | axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q8_0.gguf | 12419.10 MB |
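
As a quick local-usage sketch (not part of the original card), the snippet below downloads one of the files above with `huggingface_hub` and loads it with `llama-cpp-python`. The `repo_id` shown is an assumption inferred from the file naming in the table; adjust it to this card's actual repository, and pick whichever filename fits your memory budget.

```python
# Minimal sketch: fetch one GGUF quant and run a prompt locally.
# Assumptions: the repo_id below follows featherless-ai-quants naming
# and may differ from the real repository; the filename is one row
# from the table above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="featherless-ai-quants/axolotl-ai-co-romulus-mistral-nemo-12b-simpo-GGUF",  # assumed repo id
    filename="axolotl-ai-co-romulus-mistral-nemo-12b-simpo-Q4_K_M.gguf",  # balanced size/quality pick
)

llm = Llama(model_path=model_path, n_ctx=4096)  # modest context to keep RAM usage low
out = llm("Q: What is GGUF? A:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```

As a rule of thumb, Q4_K_M is a common balanced choice, the Q5/Q6/Q8 files trade memory for quality, and Q2/Q3 files trade quality for memory.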
## ⚡ Powered by Featherless AI
### Key Features
- 🔥 **Instant Hosting** - Deploy any supported Hugging Face model instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2,400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
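
If you prefer hosted inference over running GGUF files locally, Featherless exposes an OpenAI-compatible API. The sketch below is illustrative only: the `base_url` and model id are assumptions, so confirm both against the Featherless documentation before use.

```python
# Hosted-inference sketch using the OpenAI Python client against an
# OpenAI-compatible endpoint. The base_url and model id are assumptions;
# verify them in the Featherless docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

resp = client.chat.completions.create(
    model="axolotl-ai-co/romulus-mistral-nemo-12b-simpo",  # assumed model id
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```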
**Links:** Get Started | Documentation | Models