---
language:
- en
license: apache-2.0
---
These are GGUF quantized versions of [perlthoughts/Mistral-11B-Instruct-v0.2](https://huggingface.co/perlthoughts/Mistral-11B-Instruct-v0.2).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
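
For reference, a minimal sketch of how an importance matrix like this can be produced and applied with llama.cpp. The binary names (`imatrix`, `quantize`; newer builds prefix them with `llama-`) and the file names are assumptions, not details taken from this card:

```bash
# Compute the importance matrix from wiki.train.raw.
# -c 512 sets the chunk size; --chunks 200 processes 200 chunks,
# i.e. 200 x 512 tokens = ~100K tokens, matching the setup described above.
./imatrix -m Mistral-11B-Instruct-v0.2-f16.gguf \
    -f wiki.train.raw -o imatrix.dat -c 512 --chunks 200

# Feed the matrix into quantization, e.g. for an IQ2_XS file:
./quantize --imatrix imatrix.dat \
    Mistral-11B-Instruct-v0.2-f16.gguf \
    Mistral-11B-Instruct-v0.2-IQ2_XS.gguf IQ2_XS
```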
Some model files above 50GB are split into smaller files. To reassemble one, concatenate the parts in order. On Linux/macOS: `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`. On Windows, use `copy /b foo-Q6_K.gguf.* foo-Q6_K.gguf` in Command Prompt; PowerShell's `cat` alias reads files as text and will corrupt the binary.
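
As a concrete example, joining the hypothetical Q6_K parts and sanity-checking that the merged file loads; the `main` binary name is an assumption about your llama.cpp build (newer releases call it `llama-cli`):

```bash
# Shell globbing expands the parts in lexical order, which matches
# the split order, so a plain cat reassembles the file correctly.
cat foo-Q6_K.gguf.* > foo-Q6_K.gguf

# Quick smoke test: generate a few tokens from the merged file.
./main -m foo-Q6_K.gguf -p "Hello" -n 16
```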