---
language:
- en
license: cc-by-4.0
---
|
|
|
These are GGUF quantized versions of [FoxEngineAi/Mega-Destroyer-8x7B](https://huggingface.co/FoxEngineAi/Mega-Destroyer-8x7B). |
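For reference, a minimal sketch of running one of these quants with llama.cpp; the file name and prompt are placeholders, and the CLI binary is `main` in older builds and `llama-cli` in newer ones:

```sh
# Run a quantized GGUF with llama.cpp (placeholder file name and prompt).
# Older llama.cpp builds ship the binary as `main`, newer ones as `llama-cli`.
./llama-cli -m Mega-Destroyer-8x7B-Q4_K_M.gguf -p "Write a haiku about foxes." -n 128
```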
|
|
|
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`. |
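For context, a sketch of how such an importance matrix is typically computed and applied with llama.cpp's `imatrix` and `quantize` tools; this is not necessarily the exact command line used for these files, and flag names may vary between versions:

```sh
# Compute an importance matrix over wiki.train.raw: 200 chunks of 512 tokens
# (~100K tokens total), then pass it to the quantizer. File names are placeholders.
./imatrix -m Mega-Destroyer-8x7B-f16.gguf -f wiki.train.raw -c 512 --chunks 200 -o imatrix.dat
./quantize --imatrix imatrix.dat Mega-Destroyer-8x7B-f16.gguf Mega-Destroyer-8x7B-Q4_K_M.gguf Q4_K_M
```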
|
|
|
Model files larger than 50 GB are split into smaller pieces. To reassemble one, concatenate its parts with the `cat` command (on Windows, run it from PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
|
|
|
* What quant do I need? See https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 |
|
* Quant requests? Open a discussion in the Community tab.