---
license: apache-2.0
pipeline_tag: text-generation
library_name: gguf
---

GGUF importance matrix (imatrix) quants for https://huggingface.co/abacusai/Smaug-Mixtral-v0.1

The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).

**NOTE**: The new IQ3_M/IQ3_S/Q3_K_XS quants are currently causing a segfault during quantization, so I'll upload them once llama.cpp gets fixed.

| Layers | Context | [Template](https://huggingface.co/abacusai/Smaug-Mixtral-v0.1/blob/main/tokenizer_config.json#L32) |
| --- | --- | --- |
| 32 | 32768 | \[INST] {prompt} [/INST]<br>{response} |
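For reference, imatrix quants like these are typically produced with llama.cpp's `imatrix` and `quantize` tools. The sketch below shows the general workflow under the parameters described above (512-token batches); the file names are illustrative, not the exact ones used for this repo.

```shell
# Illustrative workflow only -- model/calibration file names are placeholders.

# 1. Compute the importance matrix over the calibration text
#    (-c 512 matches the 512-token batch size mentioned above).
./imatrix -m smaug-mixtral-v0.1-f16.gguf \
          -f calibration-data.txt \
          -o smaug-mixtral-v0.1.imatrix \
          -c 512

# 2. Quantize, using the imatrix to guide which weights keep more precision.
./quantize --imatrix smaug-mixtral-v0.1.imatrix \
           smaug-mixtral-v0.1-f16.gguf \
           smaug-mixtral-v0.1-iq2_xs.gguf IQ2_XS
```

The imatrix step matters most for the very low-bit quant types (IQ1/IQ2/IQ3 families), where per-weight importance weighting substantially reduces quality loss.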