|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
library_name: gguf |
|
--- |
|
* GGUF importance matrix (imatrix) quants for https://huggingface.co/abacusai/Smaug-Mixtral-v0.1 |
|
* The importance matrix was computed over ~50K tokens (105 chunks of 512 tokens) using a [general-purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
|
* The [imatrix is applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well.
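For reference, an imatrix like this can be produced with llama.cpp's `imatrix` tool and then fed to `quantize`; the file names below are placeholders, not the exact ones used for this repo:

```shell
# Sketch of imatrix generation with llama.cpp (file names are placeholders).
# -c 512 sets the chunk size; --chunks 105 processes 105 chunks (~50K tokens).
./imatrix -m smaug-mixtral-v0.1-f16.gguf \
          -f calibration-data.txt \
          -o imatrix.dat \
          -c 512 --chunks 105

# The resulting imatrix.dat is then supplied when quantizing:
./quantize --imatrix imatrix.dat \
           smaug-mixtral-v0.1-f16.gguf smaug-mixtral-v0.1-iq2_m.gguf IQ2_M
```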
|
|
|
**NOTE**: The new IQ3_M/IQ3_S (and updated Q3_K_XS) quants have been added, as well as IQ2_S/IQ2_M. |
|
| Layers | Context | [Template](https://huggingface.co/abacusai/Smaug-Mixtral-v0.1/blob/main/tokenizer_config.json#L32) | |
|
| --- | --- | --- | |
|
| <pre>32</pre> | <pre>32768</pre> | <pre>\<s\>[INST] {prompt} [/INST]<br>{response}</pre> | |
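A minimal sketch of filling the template above in Python (the helper name is illustrative, not part of any library):

```python
def format_prompt(prompt: str, response: str = "") -> str:
    """Fill the Mixtral-style instruct template from tokenizer_config.json:
    <s>[INST] {prompt} [/INST] followed by the response on a new line."""
    text = f"<s>[INST] {prompt} [/INST]"
    if response:
        text += f"\n{response}"
    return text

# format_prompt("Why is the sky blue?")
# -> "<s>[INST] Why is the sky blue? [/INST]"
```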
|
|
|
*Figure: Adding IQ2_S and IQ2_M to complete coverage of the 2–3 bit quantization range.*